Re-analysis of clinical trial data can change conclusions, say researchers

A re-analysis of raw data from published clinical trials sometimes leads to new conclusions about whether a new drug or medical intervention is beneficial, and for which patients.

By Krista Conger

John Ioannidis and his colleagues found that the re-analysis of data from clinical trials is relatively rare. (Photo: Norbert von der Groeben)

As many as one-third of previously published randomized clinical trials could be re-analyzed in ways that change conclusions about how many, or which types of, patients need to be treated, according to a new study by researchers at the Stanford University School of Medicine.

A culture that fails to encourage data sharing makes such re-analysis of the data extremely rare, the researchers said. They were able to identify only 37 published re-analyses over more than three decades of research. Of these, only five were conducted by researchers who were not associated with the original studies.

The new study was published Sept. 9, 2014, in the Journal of the American Medical Association.

“There is a real need for researchers to provide access to their raw data for others to analyze,” said John Ioannidis, MD, DSc, professor of medicine and director of the Stanford Prevention Research Center. “Without this access, and possibly incentives to perform this work, there is increasing lack of trust in whether the results of published, randomized trials are credible and can be taken at face value. The recent hot debates about whether oseltamivir works are only the tip of the iceberg in this crisis of confidence.”

Oseltamivir is an antiviral medication marketed under the trade name Tamiflu. Although it is licensed to treat influenza A and influenza B, some analyses and trials conducted after the drug was approved have suggested that its benefits do not outweigh the risks of side effects in otherwise healthy adults.

Ioannidis is the senior author of the study. Postdoctoral scholar Shanil Ebrahim, PhD, is the lead author. Ioannidis is co-director of the recently launched Meta-Research Innovation Center at Stanford, or METRICS, which aims to advance excellence in scientific research by evaluating and optimizing scientific practices. Enhancing reproducibility and data sharing could be instrumental in this regard.

Searching for data

Ebrahim and his colleagues conducted their study using MEDLINE, a bibliographic database maintained by the National Library of Medicine that contains over 25 million citations of biomedical publications from roughly 5,600 journals worldwide. They searched it for English-language articles describing the re-analysis of raw data from previously published studies. Meta-analyses were excluded, as were re-analyses testing a hypothesis different from that of the original trial.

The researchers screened nearly 3,000 articles of potential interest and read the full text of 226. Of these, 38 were deemed eligible; two were subsequently excluded because the articles describing the original clinical trials were unavailable. One of the remaining 36 articles contained two re-analyses, for a total of 37 re-analyses. Of these 37, 32 shared at least one author with the original trial publication.

Thirteen of the re-analyses (35 percent of the total) came to conclusions that differed from those of the original trial with regard to who could benefit from the tested medication or intervention: Three concluded that the patient population to treat should be different from the one recommended by the original study; one concluded that fewer patients should be treated; and the remaining nine indicated that more patients should be treated.

The differences between the original trials and the re-analyses often arose because the researchers conducting the re-analyses used different statistical or analytical methods, different definitions of outcomes, or different ways of handling missing data. Some re-analyses also identified errors in the original trial publication, such as the inclusion of patients who should have been excluded from the study.

Different conclusions

The aims of the re-analyzed studies varied widely. For example, one study on the treatment of enlarged, bleeding veins in the esophagus concluded that sclerotherapy, in which physicians use an endoscope to inject the veins with chemicals to induce blood clots, reduced mortality even though it didn’t prevent rebleeding. The re-analysis, which used a different statistical model of risk, concluded the treatment did prevent rebleeding but didn’t reduce mortality. The new conclusion suggested that the intervention would be best given to patients with rebleeding, rather than those at highest risk of death from the condition.

Another study investigated the best way to deliver a medication to stimulate the production of red blood cells in people with anemia by comparing a fixed dose administered once every three weeks with weight-based weekly dosing. In the re-analysis, the conclusion changed when investigators used an updated hemoglobin threshold level to determine when therapy should be initiated.

“The high proportion of re-analyses reaching different conclusions than the original papers may be partly an artifact,” said Ioannidis, who is also the C.F. Rehnborg Professor in Disease Prevention. “By that I mean that, in the current environment, re-analyses that reach exactly the same results as the original would have great difficulty getting published. However, making the raw data of trials available for re-analyses is essential not only for re-evaluating whether the original claims were correct, but also for using these data to perform additional analyses of interest and combined analyses.” In this way, existing raw data could be used to explore new clinical questions, and may sometimes eliminate the need to conduct new trials.

The fact that researchers conducting re-analyses often came to different conclusions doesn’t indicate the original studies were necessarily biased or deliberately falsified, Ioannidis added. Instead, it emphasizes the importance of making the original data freely available to other researchers to encourage dialogue and consensus, and to discourage a culture of scientific research that rewards scientists only for novel or unexpected results.

“I am very much in favor of data sharing, and believe there should be incentives for independent researchers to conduct these kinds of re-analyses,” said Ioannidis. “They can be extremely insightful.”

Other Stanford co-authors of the study are Kristian Thorlund, PhD, and Edward Mills, PhD, visiting associate professors at the Stanford Prevention Research Center.

The research was supported by postdoctoral awards from MITACS Elevate and SickKids Restracomp; the Canadian Institutes of Health Research Canada Chair; and METRICS, which is supported by a grant from the Laura and John Arnold Foundation.

Information about Stanford’s Department of Medicine, which also supported the work, is available at http://medicine.stanford.edu.

About Stanford Medicine

Stanford Medicine is an integrated academic health system comprising the Stanford School of Medicine and adult and pediatric health care delivery systems. Together, they harness the full potential of biomedicine through collaborative research, education and clinical care for patients. For more information, please visit med.stanford.edu.
