Many physicians worship evidence-based medicine, and it is generally a very good practice: it ensures that medical practice isn't dictated by our biases and assumptions but is instead based on interventions that have been proven to show benefits exceeding their anticipated risks. The optimal way to confirm the value of an intervention is a prospective randomized clinical trial. In such a trial, we enroll a population of patients with the same general disease condition, ideally with no or few other significant medical issues that might cloud the interpretation of what the study intervention is doing, and then randomize them either to what we'd consider the best current practice or to an experimental approach that might be better but might also be no better, or even worse. Other clinical trials may be smaller and may not randomize patients, but they all share a specific list of enrollment criteria, both features that are required in potential participants (inclusion criteria) and others that are specifically forbidden (exclusion criteria), to ensure that the study group is reasonably homogeneous.
Cancer treatments are almost invariably approved on the basis of prospective clinical trials, most often randomized phase 3 trials (large studies of a standard vs. an investigational strategy), and this gives us confidence that these new approaches are effective. Aside from the problem that this denies patients who are older and potentially sicker the opportunity to participate in trials and receive new treatments that may be beneficial, this approach also leaves us wondering whether treatments proven to help clinical trial candidates are as safe and beneficial for the teeming masses of other people who have complicating issues that would have disqualified them from those trials. For example, the trial comparing the immunotherapy Keytruda (pembrolizumab) to standard combination chemotherapy as first line treatment for patients with high-level expression of the protein PD-L1 on their tumor cells (a biomarker associated with a higher chance of benefit from immunotherapy) excluded patients who are unable to work or are in bed more than 50% of the time (a performance status of 2 or higher; see here for more discussion), a group that represents a significant minority of patients with advanced non-small cell lung cancer. We might presume that more frail patients with high PD-L1 expression also do very well with Keytruda, but we can't know that from the evidence we have. Because sicker patients don't have as strong an immune system as more fit patients, immunotherapy may not be as effective as we'd hope. In fact, my limited experience of treating frail patients with Keytruda or other, similar immunotherapy agents has been quite disappointing, even when the patient has high PD-L1 expression that would otherwise lead us to hope for an excellent chance of a good response.
In addition to the uncertainty about whether results seen in the relatively narrow population of more fit and often younger patients on clinical trials apply to a broader mix of real-world cancer patients, some cancer treatments have challenging side effects that can be harder to manage in community settings. Are trials with these agents more successful than these treatments would be in broader real-world practice, where there isn't the same level of support and experience as at specialty cancer centers?
Because of these limitations, namely that the patients and treatment settings of clinical trials may not represent the broader range of real-life experience, it can be extremely helpful to review data from clinical settings outside of prospective trials. These data can take many forms. A registry may be created by a company to record the outcomes of patients treated with a particular drug or combination. A large treating center or network may conduct a retrospective study, looking through its own records to see how patients treated in a certain way actually did in aggregate. There are even very large databases, such as the National Cancer Database (NCDB) and the Surveillance, Epidemiology, and End Results (SEER) database, that aggregate limited information from many thousands of patients at a time from all over the country.
Retrospective data have significant limitations. Patients may have inaccurate staging, may have other medical issues, may be frail, and so on. Patients are assigned treatments not through a controlled randomization process but based on the recommendations of their doctors and their own preferences. A patient who didn't undergo post-operative chemotherapy after surgery for lung cancer or colon cancer may be healthy and have elected not to pursue it, or they may not have received chemotherapy because they felt too sick or because their physician felt it was too dangerous in light of their other medical problems. Pooling data from a heterogeneous population introduces many additional variables compared with controlled prospective clinical trials. On the other hand, it can be a critical validation to see that the benefits shown in a controlled study of a narrowly defined population follow the same pattern when we look at outcomes in a broad real-world population.
Prospective trials provide a valuable line of evidence in shaping our understanding and best practices in cancer care, but retrospective studies of large real world populations can be a critical way to complement clinical studies and reassure us that the results apply to the more heterogeneous patient populations we regularly see and treat, many of whom would not be eligible for clinical trials.
In the near future, I'll write a few posts about retrospective results that have provided valuable insights.