Publications

All Publications


  • Toward Assessing Clinical Trial Publications for Reporting Transparency. Journal of Biomedical Informatics Kilicoglu, H., Rosemblat, G., Hoang, L., Wadhwa, S., Peng, Z., Malicki, M., Schneider, J., Ter Riet, G. 2021: 103717

    Abstract

    OBJECTIVE: To annotate a corpus of randomized controlled trial (RCT) publications with the checklist items of the CONSORT reporting guidelines and to use the corpus to develop text mining methods for RCT appraisal.

    METHODS: We annotated a corpus of 50 RCT articles at the sentence level using 37 fine-grained CONSORT checklist items. A subset (31 articles) was double-annotated and adjudicated, while 19 were annotated by a single annotator and reconciled by another. We calculated inter-annotator agreement at the article and section level using MASI (Measuring Agreement on Set-Valued Items) and at the CONSORT item level using Krippendorff's alpha. We experimented with two rule-based methods (phrase-based and section header-based) and two supervised learning approaches (support vector machine and BioBERT-based neural network classifiers) for recognizing 17 methodology-related items in the RCT Methods sections.

    RESULTS: We created CONSORT-TM, consisting of 10,709 sentences, 4,845 (45%) of which were annotated with 5,246 labels. A median of 28 CONSORT items (out of a possible 37) were annotated per article. Agreement was moderate at the article and section levels (average MASI: 0.60 and 0.64, respectively). Agreement varied considerably among individual checklist items (Krippendorff's alpha = 0.06-0.96). The model based on BioBERT performed best overall at recognizing methodology-related items (micro-precision: 0.82, micro-recall: 0.63, micro-F1: 0.71). Combining models using majority vote and label aggregation further improved precision and recall, respectively.

    CONCLUSION: Our annotated corpus, CONSORT-TM, contains more fine-grained information than earlier RCT corpora. The low frequency of some CONSORT items made it difficult to train effective text mining models to recognize them. For the items commonly reported, CONSORT-TM can serve as a testbed for text mining methods that assess RCT transparency, rigor, and reliability, and can support methods for peer review and authoring assistance. Minor modifications to the annotation scheme and a larger corpus could facilitate improved text mining models. CONSORT-TM is publicly available at https://github.com/kilicogluh/CONSORT-TM.

    DOI: 10.1016/j.jbi.2021.103717

    PubMed ID: 33647518
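
    The micro-averaged scores and the majority-vote combination mentioned in this abstract are standard multi-label evaluation and ensembling techniques. The sketch below is an illustrative Python reconstruction, not the authors' released code (the actual CONSORT-TM implementation lives in the linked GitHub repository); the label names and model outputs are hypothetical.

        # Minimal sketch: micro-averaged precision/recall/F1 over per-sentence
        # label sets, plus majority voting across several classifiers.
        from collections import Counter
        from typing import List, Set, Tuple

        def micro_prf(gold: List[Set[str]], pred: List[Set[str]]) -> Tuple[float, float, float]:
            """Micro-average over every (sentence, label) decision."""
            tp = sum(len(g & p) for g, p in zip(gold, pred))
            fp = sum(len(p - g) for g, p in zip(gold, pred))
            fn = sum(len(g - p) for g, p in zip(gold, pred))
            precision = tp / (tp + fp) if tp + fp else 0.0
            recall = tp / (tp + fn) if tp + fn else 0.0
            f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
            return precision, recall, f1

        def majority_vote(model_preds: List[List[Set[str]]]) -> List[Set[str]]:
            """Keep a label for a sentence when more than half the models predict it."""
            n = len(model_preds)
            combined = []
            for per_sentence in zip(*model_preds):
                votes = Counter(label for labels in per_sentence for label in labels)
                combined.append({lab for lab, c in votes.items() if c > n / 2})
            return combined

        # Hypothetical CONSORT-item labels for two sentences, three models:
        gold = [{"randomization", "blinding"}, {"sample_size"}]
        m1 = [{"randomization"}, {"sample_size"}]
        m2 = [{"randomization", "blinding"}, set()]
        m3 = [{"blinding"}, {"sample_size"}]
        print(micro_prf(gold, majority_vote([m1, m2, m3])))

    As the abstract notes, voting tends to raise precision (a label must be confirmed by several models) at some cost to recall, which is consistent with the authors reporting label aggregation as the recall-improving alternative.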

  • Preprint Servers' Policies, Submission Requirements, and Transparency in Reporting and Research Integrity Recommendations. JAMA Malicki, M., Jeroncic, A., Ter Riet, G., Bouter, L. M., Ioannidis, J. P., Goodman, S. N., Aalbersberg, I. J. 2020; 324 (18): 1901-3

    DOI: 10.1001/jama.2020.17195

    PubMed ID: 33170231

  • The worldwide clinical trial research response to the COVID-19 pandemic - the first 100 days. F1000Research Janiaud, P., Axfors, C., Van't Hooft, J., Saccilotto, R., Agarwal, A., Appenzeller-Herzog, C., Contopoulos-Ioannidis, D. G., Danchev, V., Dirnagl, U., Ewald, H., Gartlehner, G., Goodman, S. N., Haber, N. A., Ioannidis, A. D., Ioannidis, J. P., Lythgoe, M. P., Ma, W., Macleod, M., Malicki, M., Meerpohl, J. J., Min, Y., Moher, D., Nagavci, B., Naudet, F., Pauli-Magnus, C., O'Sullivan, J. W., Riedel, N., Roth, J. A., Sauermann, M., Schandelmaier, S., Schmitt, A. M., Speich, B., Williamson, P. R., Hemkens, L. G. 2020; 9: 1193

    Abstract

    Background: Never before have clinical trials drawn as much public attention as those testing interventions for COVID-19. We aimed to describe the worldwide COVID-19 clinical research response and its evolution over the first 100 days of the pandemic.

    Methods: Descriptive analysis of planned, ongoing, or completed trials testing any intervention to treat or prevent COVID-19, systematically identified in trial registries, preprint servers, and literature databases by April 9, 2020. A survey of all trials was conducted to assess their recruitment status up to July 6, 2020.

    Results: Most of the 689 trials (overall target sample size 396,366) were small (median sample size 120; interquartile range [IQR] 60-300) but randomized (75.8%; n=522) and were often conducted in China (51.1%; n=352) or the USA (11%; n=76). 525 trials (76.2%) planned to include 155,571 hospitalized patients, and 25 (3.6%) planned to include 96,821 health-care workers. Treatments were evaluated in 607 trials (88.1%), frequently antivirals (n=144) or antimalarials (n=112); 78 trials (11.3%) focused on prevention, including 14 vaccine trials. No trial investigated social distancing. Interventions tested in 11 trials with >5,000 participants were also tested in 169 smaller trials (median sample size 273; IQR 90-700). Hydroxychloroquine alone was investigated in 110 trials. While 414 trials (60.0%) expected completion in 2020, only 35 trials (4.1%; 3,071 participants) were completed by July 6. Of 112 trials with detailed recruitment information, 55 had recruited <20% of the targeted sample, 27 between 20% and 50%, and 30 over 50% (median 14.8% [IQR 2.0-62.0%]).

    Conclusions: The size and speed of the COVID-19 clinical trial agenda are unprecedented. However, most trials were small, investigating only a small fraction of treatment options. The feasibility of this research agenda is questionable, and many trials may end in futility, wasting research resources. Much better coordination is needed to respond to global health threats.

    DOI: 10.12688/f1000research.26707.1

    PubMed ID: 33082937
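
    The medians and interquartile ranges quoted throughout this abstract are plain descriptive statistics; the sketch below shows how such summaries are computed, with a hypothetical list of target sample sizes standing in for the real registry data (it is not the study's analysis code).

        # Minimal sketch: median and IQR for a set of trial target sample sizes.
        from statistics import median, quantiles

        def median_iqr(values):
            """Return (median, 25th percentile, 75th percentile)."""
            q1, _, q3 = quantiles(values, n=4)  # quartile cut points
            return median(values), q1, q3

        # Hypothetical target sample sizes extracted from a trial registry:
        sample_sizes = [60, 90, 120, 273, 300, 700, 5000]
        med, q1, q3 = median_iqr(sample_sizes)
        print(f"median {med}; IQR {q1}-{q3}")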
