
Tina Hernandez-Boussard
Professor of Medicine (Biomedical Informatics), of Biomedical Data Science, of Surgery and, by courtesy, of Epidemiology and Population Health

Bio

Dr. Hernandez-Boussard is Associate Dean of Research and Professor of Medicine (Biomedical Informatics), Biomedical Data Science, Surgery, and, by courtesy, Epidemiology and Population Health at Stanford University. Drawing on expertise in biomedical informatics, health services research, and epidemiology, she advances healthcare through the development, evaluation, and application of innovative methods. Her research aims to monitor, measure, and predict equitable healthcare outcomes. By leveraging real-world data, her team builds a body of evidence that can improve patient outcomes, streamline healthcare delivery, and inform health policy decisions. Dr. Hernandez-Boussard also focuses on mitigating bias and enhancing equity in artificial intelligence applications in healthcare settings. Through her research and evaluation of AI technology, she seeks to advance healthcare practice while ensuring that diverse populations receive equitable resources, care, and outcomes.

Publications

  • Guiding Principles to Address the Impact of Algorithm Bias on Racial and Ethnic Disparities in Health and Health Care. JAMA Network Open. Chin, M. H., Afsar-Manesh, N., Bierman, A. S., Chang, C., Colon-Rodriguez, C. J., Dullabh, P., Duran, D. G., Fair, M., Hernandez-Boussard, T., Hightower, M., Jain, A., Jordan, W. B., Konya, S., Moore, R. H., Moore, T. T., Rodriguez, R., Shaheen, G., Snyder, L. P., Srinivasan, M., Umscheid, C. A., Ohno-Machado, L. 2023; 6 (12): e2345050

    Abstract

    Importance: Health care algorithms are used for diagnosis, treatment, prognosis, risk stratification, and allocation of resources. Bias in the development and use of algorithms can lead to worse outcomes for racial and ethnic minoritized groups and other historically marginalized populations, such as individuals with lower income.

    Objective: To provide a conceptual framework and guiding principles for mitigating and preventing bias in health care algorithms to promote health and health care equity.

    Evidence Review: The Agency for Healthcare Research and Quality and the National Institute on Minority Health and Health Disparities convened a diverse panel of experts to review evidence, hear from stakeholders, and receive community feedback.

    Findings: The panel developed a conceptual framework to apply guiding principles across an algorithm's life cycle, centering health and health care equity for patients and communities as the goal, within the wider context of structural racism and discrimination. Multiple stakeholders can mitigate and prevent bias at each phase of the algorithm life cycle, including problem formulation (phase 1); data selection, assessment, and management (phase 2); algorithm development, training, and validation (phase 3); deployment and integration of algorithms in intended settings (phase 4); and algorithm monitoring, maintenance, updating, or deimplementation (phase 5). Five principles should guide these efforts: (1) promote health and health care equity during all phases of the health care algorithm life cycle; (2) ensure health care algorithms and their use are transparent and explainable; (3) authentically engage patients and communities during all phases of the health care algorithm life cycle and earn trustworthiness; (4) explicitly identify health care algorithmic fairness issues and trade-offs; and (5) establish accountability for equity and fairness in outcomes from health care algorithms.

    Conclusions and Relevance: Multiple stakeholders must partner to create systems, processes, regulations, incentives, standards, and policies to mitigate and prevent algorithmic bias. Reforms should implement guiding principles that support promotion of health and health care equity in all phases of the algorithm life cycle, as well as transparency and explainability, authentic community engagement and ethical partnerships, explicit identification of fairness issues and trade-offs, and accountability for equity and fairness.

    View details for DOI 10.1001/jamanetworkopen.2023.45050

    View details for PubMedID 38100101

  • Perceptions of Data Set Experts on Important Characteristics of Health Data Sets Ready for Machine Learning: A Qualitative Study. JAMA Network Open. Ng, M. Y., Youssef, A., Miner, A. S., Sarellano, D., Long, J., Larson, D. B., Hernandez-Boussard, T., Langlotz, C. P. 2023; 6 (12): e2345892

    Abstract

    The lack of data quality frameworks to guide the development of artificial intelligence (AI)-ready data sets limits their usefulness for machine learning (ML) research in health care and hinders the diagnostic excellence of developed clinical AI applications for patient care. To discern what constitutes high-quality and useful data sets for health and biomedical ML research purposes according to subject matter experts, this qualitative study interviewed data set experts, particularly those who are creators and ML researchers. Semistructured interviews were conducted in English and remotely through a secure video conferencing platform between August 23, 2022, and January 5, 2023. A total of 93 experts were invited to participate; 20 were enrolled and interviewed. Using purposive sampling, experts were affiliated with a diverse representation of 16 health data sets/databases across organizational sectors. Content analysis was used to evaluate survey information, and thematic analysis was used to analyze interview data. The main outcome was data set experts' perceptions of what makes data sets AI ready. Participants included 20 data set experts (11 [55%] men; mean [SD] age, 42 [11] years), all of whom were health data set creators; 18 of the 20 were also ML researchers. Themes (3 main and 11 subthemes) were identified and integrated into an AI-readiness framework to show their association within the health data ecosystem. Participants partially determined the AI readiness of data sets using priority appraisal elements of accuracy, completeness, consistency, and fitness. Ethical acquisition and societal impact emerged as appraisal considerations that have not been described to date in prior data quality frameworks. Factors that drive creation of high-quality health data sets and mitigate risks associated with data reuse in ML research were also relevant to AI readiness. The state of data availability, data quality standards, documentation, team science, and incentivization were associated with elements of AI readiness and the overall perception of data set usefulness. In this qualitative study of data set experts, participants contributed to the development of a grounded framework for AI data set quality. Data set AI readiness required the concerted appraisal of many elements and the balancing of transparency and ethical reflection against pragmatic constraints. The movement toward more reliable, relevant, and ethical AI and ML applications for patient care will inevitably require strategic updates to data set creation practices.

    View details for DOI 10.1001/jamanetworkopen.2023.45892

    View details for PubMedID 38039004

  • Promoting Equity In Clinical Decision Making: Dismantling Race-Based Medicine. Health Affairs (Project Hope). Hernandez-Boussard, T., Siddique, S. M., Bierman, A. S., Hightower, M., Burstin, H. 2023; 42 (10): 1369-1373

    Abstract

    As the use of artificial intelligence has spread rapidly throughout the US health care system, concerns have been raised about racial and ethnic biases built into the algorithms that often guide clinical decision making. Race-based medicine, which relies on algorithms that use race as a proxy for biological differences, has led to treatment patterns that are inappropriate, unjust, and harmful to minoritized racial and ethnic groups. These patterns have contributed to persistent disparities in health and health care. To reduce these disparities, we recommend a race-aware approach to clinical decision support that considers social and environmental factors such as structural racism and social determinants of health. Recent policy changes in medical specialty societies and innovations in algorithm development represent progress on the path to dismantling race-based medicine. Success will require continued commitment and sustained efforts among stakeholders in the health care, research, and technology sectors. Increasing the diversity of clinical trial populations, broadening the focus of precision medicine, improving education about the complex factors shaping health outcomes, and developing new guidelines and policies to enable culturally responsive care are important next steps.

    View details for DOI 10.1377/hlthaff.2023.00545

    View details for PubMedID 37782875

  • Applying natural language processing to patient messages to identify depression concerns in cancer patients. Journal of the American Medical Informatics Association: JAMIA. van Buchem, M. M., de Hond, A. A., Fanconi, C., Shah, V., Schuessler, M., Kant, I. M., Steyerberg, E. W., Hernandez-Boussard, T. 2024

    Abstract

    This study aims to explore and develop tools for early identification of depression concerns among cancer patients by leveraging the novel data source of messages sent through a secure patient portal. We developed classifiers based on logistic regression (LR), support vector machines (SVMs), and 2 Bidirectional Encoder Representations from Transformers (BERT) models (original and Reddit-pretrained) on 6600 patient messages from a cancer center (2009-2022), annotated by a panel of healthcare professionals. Performance was compared using AUROC scores, and model fairness and explainability were examined. We also examined correlations between model predictions and depression diagnosis and treatment. BERT and RedditBERT attained AUROC scores of 0.88 and 0.86, respectively, compared to 0.79 for LR and 0.83 for SVM. BERT showed bigger differences in performance across sex, race, and ethnicity than RedditBERT. Patients who sent messages classified as concerning had a higher chance of receiving a depression diagnosis, a prescription for antidepressants, or a referral to the psycho-oncologist. Explanations from BERT and RedditBERT differed, with no clear preference from annotators. We show the potential of BERT and RedditBERT in identifying depression concerns in messages from cancer patients. Performance disparities across demographic groups highlight the need for careful consideration of potential biases. Further research is needed to address biases, evaluate real-world impacts, and ensure responsible integration into clinical settings. This work represents a significant methodological advancement in the early identification of depression concerns among cancer patients. Our work contributes to a route to reduce clinical burden while enhancing overall patient care, leveraging BERT-based models.

    View details for DOI 10.1093/jamia/ocae188

    View details for PubMedID 39018490
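
    The pipeline this abstract describes — classify each portal message, then score the ranking with AUROC — can be illustrated with a minimal sketch of the study's logistic regression baseline. This is not the authors' code: the model details, features, and every message below are invented for illustration (the real study used 6600 expert-annotated messages and also trained SVM and BERT models).

    ```python
    # Minimal sketch of a TF-IDF + logistic regression message classifier,
    # analogous to the LR baseline in the study. All data here is synthetic.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.pipeline import make_pipeline

    # Hypothetical stand-in messages; 1 = message raises a depression concern.
    messages = [
        "I feel hopeless and can't sleep since the diagnosis",
        "Requesting a refill of my blood pressure medication",
        "Everything feels pointless lately, I cry most days",
        "What time is my infusion appointment on Tuesday?",
        "I have lost interest in things I used to enjoy",
        "Please send my lab results to my primary care doctor",
    ]
    labels = [1, 0, 1, 0, 1, 0]

    # Fit the classifier: unigram/bigram TF-IDF features into logistic regression.
    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    clf.fit(messages, labels)

    # Score held-out messages by predicted concern probability and evaluate with
    # AUROC, the metric the study reports (their LR baseline reached 0.79).
    test_msgs = ["I feel so hopeless and sad", "Can I reschedule my appointment?"]
    probs = clf.predict_proba(test_msgs)[:, 1]
    auroc = roc_auc_score([1, 0], probs)
    print(f"AUROC on toy held-out set: {auroc:.2f}")
    ```

    On real data, per-group AUROC (by sex, race, ethnicity) would also be computed to surface the fairness disparities the study examines.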


Academic Appointments

Professor, Medicine - Biomedical Informatics Research

Professor, Biomedical Data Science

Professor, Surgery - General Surgery

Member, Stanford Cancer Institute

Professional Education

M.S., Stanford University, Health Services Research (2013)

Ph.D., University Claude Bernard, Lyon 1, Computational Biology (1999)

M.P.H., Yale University, Epidemiology (1993)

B.A., University of California, Irvine, Psychology (1991)

B.S., University of California, Irvine, Biology (1991)