
Intention to treat analysis in clinical trials when there are missing data
David Streiner, PhD, John Geddes, MD
Editors, Evidence-Based Mental Health


The problem

If you open any book about research design or statistics, you will see examples of studies in which 40 or 50 people are randomly assigned to a treatment and a control condition, tested at baseline, and then retested every week for the next 2 months. These texts then extol the virtues of being able to look at changes over time, and how the groups can be compared with regard to the pattern of change. Buried in a footnote (if it's mentioned at all) is a caveat that, in order to perform the analysis, we need complete data on all subjects. If even 1 data point is missing for a person, that subject must be dropped from consideration. Unfortunately, trial participants rarely read our textbooks, and do not know the importance of complete data. Consequently, they sometimes have second thoughts about continuing with the research (perhaps because of adverse effects); they may get tired of the necessity to complete a questionnaire a number of times or to come into the clinic on a weekly basis in order to have blood drawn; they may move out of town; or they may, of course, die before the study ends.

Whenever follow up data are incomplete, the researcher faces some major problems. Firstly, because fewer subjects have complete data than was originally planned for, the study may be underpowered; that is, it may not have enough subjects to show that the difference between the groups is statistically significant, even though it may be clinically important.1 Secondly, it is a dictum of research that people do not drop out of studies for trivial reasons. Those who do not complete a trial of a new treatment for depression, for example, may be those who (a) improved the most, and don't see the necessity of continuing; (b) improved the least, and see no reason to continue to comply with a programme that isn't working for them; or (c) may have become so depressed that they committed suicide. If the majority of dropouts are those who improved, then this will serve to make the interventions appear less effective than they actually are. Conversely, if most of the people dropped out because the new treatment was ineffective, it will, paradoxically, make the intervention look better, because many of the non-responders are no longer in that arm of the study. It is generally accepted, therefore, that the most clinically informative, as well as the most statistically robust, method of analysis is an intention to treat (ITT) analysis, which includes all randomised study participants in the groups to which they were randomised. But how do you do an ITT analysis if an appreciable number of patients have dropped out?
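As a hypothetical numerical sketch (all counts are invented for illustration), here is how informative dropout can distort a complete-case comparison relative to the ITT denominator:

```python
# Hypothetical illustration (invented counts): one arm of a trial in
# which the patients who improved most are the ones who drop out.
randomised = 100            # patients randomised to the new treatment
true_responders = 60        # responders, had everyone been followed up
responder_dropouts = 30     # responders who left ("felt better, stopped coming")
nonresponder_dropouts = 10

# Complete-case analysis simply excludes the dropouts.
completers = randomised - responder_dropouts - nonresponder_dropouts   # 60
responding_completers = true_responders - responder_dropouts           # 30
print(responding_completers / completers)    # 0.5: the treatment looks worse

# Intention to treat denominator: everyone who was randomised.
print(true_responders / randomised)          # 0.6: the true response rate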

Ways of dealing with missing data

WORST CASE SCENARIO

If the study has a dichotomous outcome (eg, readmission, >50% decrease in score on a depression inventory), then one approach is to assume that all of the dropouts in the experimental group did poorly, and all those in the control group did well. This is the most conservative outcome, so if the experimental group is superior, then we can be sure that the results are not due to dropouts. However, there are 3 problems with this tactic. Firstly, it may underestimate the true magnitude of the effect. Secondly, if the results are negative, we do not know if it is because the new treatment truly was ineffective, or is a consequence of the dropouts. Finally, it ignores all of the data except for the final assessment.
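A minimal sketch of this worst case calculation for a dichotomous outcome (the counts and variable names are invented for illustration):

```python
# Worst case scenario imputation for a dichotomous outcome: every
# dropout in the experimental arm is counted as a failure, every
# dropout in the control arm as a success. Counts are hypothetical.
exp_success, exp_dropouts, exp_n = 40, 10, 60   # experimental arm
ctl_success, ctl_dropouts, ctl_n = 30, 8, 60    # control arm

exp_rate = exp_success / exp_n                   # dropouts count as failures
ctl_rate = (ctl_success + ctl_dropouts) / ctl_n  # dropouts count as successes

print(f"experimental: {exp_rate:.2f}, control: {ctl_rate:.2f}")
# If exp_rate still exceeds ctl_rate under this assumption, the
# observed benefit cannot be explained by the dropouts alone.
```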

LAST OBSERVATION CARRIED FORWARD

One method to minimise these problems is called the Last Observation Carried Forward (LOCF). LOCF is very commonly used in drug trials. Let's assume that there are 8 weekly assessments after the baseline observation. If a patient drops out of the study after the third week, then this value is “carried forward” and assumed to be his or her score for the 5 missing data points. The assumption is that patients improve gradually from the start of the study until the end, so that carrying forward an intermediate value is a conservative estimate of how well the person would have done had he or she remained in the study. The advantages to this approach are that (a) it minimises the number of subjects who are eliminated from the analysis; and (b) it allows the analysis to examine trends over time, rather than focusing simply on the endpoint. However, it suffers from 2 other, equally troubling, shortcomings. Firstly, it assumes that no improvement will occur outside of treatment, and thus ignores the natural history of some disorders. Secondly, it ignores the “trajectory” of the change prior to the final value; that is, it does not take into account that some dropouts may have shown no change up to and including the last assessment, while others may have been improving (or getting worse). On balance, though, it is an improvement over eliminating subjects from the analysis.
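A minimal sketch of LOCF for a single participant (scores are hypothetical; a lower score on the inventory indicates improvement):

```python
# Last Observation Carried Forward for one subject's scores:
# baseline plus 8 weekly assessments, with None marking a missed visit.
def locf(scores):
    """Replace each missing value with the most recent observed one."""
    filled, last = [], None
    for s in scores:
        if s is not None:
            last = s
        filled.append(last)
    return filled

# Dropped out after the third week: 5 missing data points.
weekly = [24, 21, 19, 18, None, None, None, None, None]
print(locf(weekly))  # [24, 21, 19, 18, 18, 18, 18, 18, 18]
```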

GROWTH CURVE ANALYSIS

Within the last decade, a new approach has emerged, usually called Growth Curve Analysis.2 As the name implies, it calculates the trajectory of change over time for each person based on whatever data are available (as long as there are at least 2 measurements), and then estimates (or “imputes”) what the missing data would be if the person followed on the same trajectory. This also means that it can estimate a value that is missing in the middle because the person skipped 1 or 2 evaluation sessions, but actually completed the study. This method requires considerable statistical sophistication (and powerful computer programs), however, so that it is only now finding its way into psychiatric research.
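Growth curve analysis proper fits a mixed-effects model across all subjects; the sketch below only illustrates the core idea of extrapolating an individual's observed trajectory, here with an ordinary least squares line and hypothetical scores:

```python
import numpy as np

# Illustrative only: a full growth curve analysis pools information
# across subjects in a mixed-effects model. Here we simply extend one
# person's observed trajectory with a straight line (invented data).
weeks = np.array([0, 1, 2, 3])              # assessments actually attended
scores = np.array([24.0, 21.0, 19.0, 18.0])
slope, intercept = np.polyfit(weeks, scores, deg=1)

missing_weeks = np.array([4, 5, 6, 7, 8])
imputed = intercept + slope * missing_weeks
print(imputed.round(1))  # [15.5 13.5 11.5  9.5  7.5]: scores had the trajectory continued
```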

Conclusions

The main drawback of all methods of estimating outcomes in the absence of actual data is that they introduce uncertainty about what really happened to the patient. The degree of uncertainty may be acceptable if only a few patients drop out, but it becomes increasingly problematic as the proportion grows. For example, in the pre-registration randomised trials of atypical antipsychotics, the dropout rate was often greater than 50% (see “Review: the benefit of atypical antipsychotics over standard drugs disappears after controlling for comparator dose” on page 77 of this issue).3 This leads to great uncertainty about the reliability of the quantitative results. In EBMH, we only abstract trials that have an endpoint assessment for at least 80% of patients.

In general, a more helpful approach would be for those conducting trials to distinguish between withdrawal from treatment and loss to follow up; patients who are no longer receiving the trial treatment can often still be followed up. This would ensure that outcome data from a high proportion of patients are included in the analysis.

References