People: Principal Investigator

Associate Professor (Research) of Medicine (Biomedical Informatics), of Biomedical Data Science and of Surgery

Bio

Dr. Hernandez-Boussard is an Associate Professor in Medicine (Biomedical Informatics), Biomedical Data Science, and Surgery at the Stanford University School of Medicine. Her background and expertise are in computational biology, with a concentration on accountability measures, population health, and health policy. A key focus of her research is the application of novel methods and tools to large clinical datasets for hypothesis generation, comparative effectiveness research, and the evaluation of the quality of healthcare delivery.

Publications

  • Real world evidence in cardiovascular medicine: assuring data validity in electronic health record-based studies. Journal of the American Medical Informatics Association: JAMIA Hernandez-Boussard, T., Monda, K. L., Crespo, B. C., Riskin, D. 2019

    Abstract

    OBJECTIVE: With growing availability of digital health data and technology, health-related studies are increasingly augmented or implemented using real world data (RWD). Recent federal initiatives promote the use of RWD to make clinical assertions that influence regulatory decision-making. Our objective was to determine whether traditional real world evidence (RWE) techniques in cardiovascular medicine achieve accuracy sufficient for credible clinical assertions, also known as "regulatory-grade" RWE. DESIGN: Retrospective observational study using electronic health records (EHR), 2010-2016. METHODS: A predefined set of clinical concepts was extracted from EHR structured (EHR-S) and unstructured (EHR-U) data using traditional query techniques and artificial intelligence (AI) technologies, respectively. Performance was evaluated against manually annotated cohorts using standard metrics. Accuracy was compared to predefined criteria for regulatory-grade RWE. Differences in accuracy were compared using the Chi-square test. RESULTS: The dataset included 10,840 clinical notes. Individual concept occurrence ranged from 194 for coronary artery bypass graft to 4,502 for diabetes mellitus. Average recall and precision were 51.7% and 98.3%, respectively, in EHR-S, and 95.5% and 95.3%, respectively, in EHR-U. For each clinical concept, EHR-S accuracy was below regulatory grade, while EHR-U met or exceeded criteria, with the exception of medications. CONCLUSIONS: Identifying an appropriate RWE approach depends on the cohorts studied and the accuracy required. In this study, recall varied greatly between EHR-S and EHR-U. Overall, EHR-S did not meet regulatory-grade criteria, while EHR-U did. These results suggest that recall should be routinely measured in EHR-based studies intended for regulatory use. Furthermore, advanced data and technologies may be required to achieve regulatory-grade results.

    View details for DOI 10.1093/jamia/ocz119

    View details for PubMedID 31414700

  • Comparison of Orthogonal NLP Methods for Clinical Phenotyping and Assessment of Bone Scan Utilization among Prostate Cancer Patients. Journal of Biomedical Informatics Coquet, J., Bozkurt, S., Kan, K. M., Ferrari, M. K., Blayney, D. W., Brooks, J. D., Hernandez-Boussard, T. 2019; 103184

    Abstract

    Clinical care guidelines recommend that newly diagnosed prostate cancer patients at high risk for metastatic spread receive a bone scan prior to treatment and that low-risk patients not receive one. The objective was to develop an automated pipeline to interrogate heterogeneous data and evaluate the use of bone scans using two different Natural Language Processing (NLP) approaches. Our cohort was divided into risk groups based on Electronic Health Records (EHR). Information on bone scan utilization was identified in both structured data and free text from clinical notes. Our pipeline annotated sentences with a combination of a rule-based method using the ConText algorithm (a generalization of NegEx) and a Convolutional Neural Network (CNN) method using word2vec to produce word embeddings. A total of 5,500 patients and 369,764 notes were included in the study. A total of 39% of patients were high risk, and 73% of these received a bone scan; of the 18% of patients who were low risk, 10% received one. The CNN model outperformed the rule-based model (F-measure = 0.918 and 0.897, respectively). We demonstrate that a combination of both models can maximize precision or recall, depending on the study question. Using structured data, we accurately classified patients' cancer risk group, identified bone scan documentation with two NLP methods, and evaluated guideline adherence. Our pipeline can be used to provide concrete feedback to clinicians and guide treatment decisions.

    View details for PubMedID 31014980

  • Predicting inadequate postoperative pain management in depressed patients: A machine learning approach. PLoS One Parthipan, A., Banerjee, I., Humphreys, K., Asch, S. M., Curtin, C., Carroll, I., Hernandez-Boussard, T. 2019; 14 (2): e0210575

    Abstract

    Widely prescribed prodrug opioids (e.g., hydrocodone) require conversion by the liver enzyme CYP-2D6 to exert their analgesic effects. The most commonly prescribed class of antidepressants, selective serotonin reuptake inhibitors (SSRIs), inhibits CYP-2D6 activity and therefore may reduce the effectiveness of prodrug opioids. We used a machine learning approach to identify patients prescribed a combination of SSRIs and prodrug opioids postoperatively and to examine the effect of this combination on postoperative pain control. Using EHR data from an academic medical center, we identified patients receiving surgery over a 9-year period. We developed and validated natural language processing (NLP) algorithms to extract depression-related information (diagnosis, SSRI use, symptoms) from structured and unstructured data elements. The primary outcome was the difference between the preoperative pain score and postoperative pain at discharge and at the 3-week and 8-week time points. We developed computational models to predict the increase or decrease in postoperative pain across the 3 time points using the patient's EHR data (e.g., medications, vitals, demographics) captured before surgery. We evaluated the generalizability of the models using 10-fold cross-validation, in which the holdout test is repeated 10 times and the mean area under the curve (AUC) is used as the evaluation metric for prediction performance. We identified 4,306 surgical patients with symptoms of depression. A total of 14.1% were prescribed both an SSRI and a prodrug opioid, 29.4% were prescribed an SSRI and a non-prodrug opioid, 18.6% were prescribed a prodrug opioid but were not on SSRIs, and 37.5% were prescribed a non-prodrug opioid but were not on SSRIs. Our NLP algorithm identified depression with an F1 score of 0.95 against manual annotation of 300 randomly sampled clinical notes. On average, patients receiving prodrug opioids had lower average pain scores (p < 0.05), with the exception of the SSRI+ group at 3-week postoperative follow-up. However, SSRI+/Prodrug+ patients had significantly worse pain control at discharge and at 3- and 8-week follow-up (p < 0.01) compared to SSRI+/Prodrug- patients, whereas there was no difference in pain control among the SSRI- patients by prodrug opioid use (p > 0.05). The machine learning algorithm accurately predicted the increase or decrease of the discharge, 3-week, and 8-week follow-up pain scores relative to the preoperative pain score using 10-fold cross-validation (mean area under the receiver operating characteristic curve 0.87, 0.81, and 0.69, respectively). Preoperative pain, surgery type, and opioid tolerance were the strongest predictors of postoperative pain control. We provide the first direct clinical evidence that the known ability of SSRIs to inhibit prodrug opioid effectiveness is associated with worse pain control among depressed patients. Current prescribing patterns indicate that prescribers may not account for this interaction when choosing an opioid. The results imply that prescribers might instead choose direct-acting opioids (e.g., oxycodone or morphine) in depressed patients on SSRIs.

    View details for PubMedID 30726237

  • Extracting Patient-Centered Outcomes from Clinical Notes in Electronic Health Records: Assessment of Urinary Incontinence After Radical Prostatectomy. EGEMS (Washington, DC) Gori, D., Banerjee, I., Chung, B. I., Ferrari, M., Rucci, P., Blayney, D. W., Brooks, J. D., Hernandez-Boussard, T. 2019; 7 (1): 43

    Abstract

    Objective: To assess documentation of urinary incontinence (UI) in prostatectomy patients using unstructured clinical notes from Electronic Health Records (EHRs). Methods: We developed a weakly supervised natural language processing tool to extract assessments, as recorded in unstructured text notes, of UI before and after radical prostatectomy in a single academic practice across multiple clinicians. Validation was carried out using a subset of patients who completed EPIC-26 surveys before and after surgery. The prevalence of UI as assessed by the EHR and EPIC-26 was compared using repeated-measures ANOVA. The agreement of reported UI between the EHR and EPIC-26 was evaluated using Cohen's Kappa coefficient. Results: A total of 4,870 patients and 716 surveys were included. Preoperative prevalence of UI was 12.7 percent. Postoperative prevalence was 71.8 percent at 3 months, 50.2 percent at 6 months, and 34.4 percent and 41.8 percent at 12 and 24 months, respectively. Similar rates were recorded by physicians in the EHR, particularly for early follow-up. For all time points, the agreement between EPIC-26 and the EHR was moderate (all p < 0.001) and ranged from 86.7 percent agreement at baseline (Kappa = 0.48) to 76.4 percent agreement at 24 months postoperative (Kappa = 0.047). Conclusions: We have developed a tool to assess documentation of UI after prostatectomy using EHR clinical notes. Our results suggest such a tool can facilitate unbiased measurement of important patient-centered outcomes (PCOs) using real-world data, which are routinely recorded in unstructured EHR clinician notes. Integrating PCO information into clinical decision support can help guide shared treatment decisions and promote patient-valued care.

    View details for DOI 10.5334/egems.297

    View details for PubMedID 31497615
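
As an aside on the evaluation reported in the JAMIA study above: recall and precision against a manually annotated gold standard can be computed at the patient level for each clinical concept. The following is a minimal, hypothetical Python sketch of that comparison; the patient IDs, concept, and variable names are illustrative assumptions, not the study's actual data or code.

    # Hypothetical sketch: patient-level recall and precision for one clinical
    # concept, comparing automated extraction against manual annotation.
    # All data and names here are illustrative, not from the published study.

    def recall_precision(extracted, gold):
        """Recall = TP / (TP + FN); Precision = TP / (TP + FP)."""
        tp = len(extracted & gold)      # found by the system and by annotators
        fn = len(gold - extracted)      # missed by the system
        fp = len(extracted - gold)      # asserted by the system, not by annotators
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        return recall, precision

    # Toy example: patients flagged for "diabetes mellitus"
    gold_patients = {"p01", "p02", "p03", "p04", "p05"}   # manual annotation
    structured_hits = {"p01", "p02"}                      # EHR-S-style query
    nlp_hits = {"p01", "p02", "p03", "p04", "p06"}        # EHR-U-style extraction

    for name, hits in [("EHR-S", structured_hits), ("EHR-U", nlp_hits)]:
        r, p = recall_precision(hits, gold_patients)
        print(f"{name}: recall={r:.2f}, precision={p:.2f}")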
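
The bone scan phenotyping paper above combines a rule-based annotator built on the ConText algorithm with a word2vec/CNN classifier. The sketch below is only a toy, hypothetical simplification of the rule-based idea (flagging whether a sentence affirms or negates a bone scan); it is not the ConText implementation or the CNN model used in the study, and the trigger terms are assumptions.

    import re

    # Toy, NegEx/ConText-inspired check: does a sentence mention a bone scan,
    # and does a negation cue appear in the same sentence? Real ConText also
    # handles trigger scope, historical/hypothetical status, and experiencer.
    BONE_SCAN = re.compile(r"\bbone scan\b", re.IGNORECASE)
    NEGATION = re.compile(r"\b(no|not performed|declined|without|never had)\b", re.IGNORECASE)

    def annotate(sentence):
        if not BONE_SCAN.search(sentence):
            return "no_mention"
        return "negated" if NEGATION.search(sentence) else "affirmed"

    for s in [
        "Bone scan was obtained prior to treatment.",
        "Patient declined bone scan at this time.",
        "Will proceed with active surveillance.",
    ]:
        print(annotate(s), "-", s)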
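
The pain management study above reports model performance as the mean area under the curve over 10-fold cross-validation. The snippet below is a generic scikit-learn sketch of that evaluation pattern on synthetic data; the model and features are placeholders, not the study's EHR-based pipeline.

    # Generic 10-fold cross-validation with mean AUC on synthetic data.
    # The model and features are placeholders, not the study's pipeline.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import StratifiedKFold, cross_val_score

    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    model = LogisticRegression(max_iter=1000)
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    aucs = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    print(f"mean AUC over 10 folds: {aucs.mean():.2f} (+/- {aucs.std():.2f})")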
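
The urinary incontinence study above quantifies agreement between EHR-derived and EPIC-26 survey-derived assessments with percent agreement and Cohen's Kappa. Below is a minimal sketch of that comparison on toy binary labels; the labels are invented for illustration.

    # Toy agreement check between two binary label sources (e.g., EHR-derived
    # vs. survey-derived urinary incontinence). Labels are illustrative only.
    import numpy as np
    from sklearn.metrics import cohen_kappa_score

    ehr_labels    = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
    survey_labels = np.array([1, 0, 0, 1, 0, 0, 1, 1, 1, 0])

    percent_agreement = (ehr_labels == survey_labels).mean()
    kappa = cohen_kappa_score(ehr_labels, survey_labels)
    print(f"agreement = {percent_agreement:.1%}, Cohen's kappa = {kappa:.2f}")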

Academic Appointments

Associate Professor, Medicine - Biomedical Informatics Research 

Associate Professor, Biomedical Data Science

Associate Professor, Surgery - General Surgery

Member, Stanford Cancer Institute

Professional Education

M.S., Stanford University, Health Services Research (2013)

Ph.D., University Claude Bernard, Lyon 1, Computational Biology (1999)

M.P.H., Yale University, Epidemiology (1993)

B.A., University of California, Irvine, Psychology (1991)

B.S., University of California, Irvine, Biology (1991)