Principal Investigator

Professor of Medicine (Biomedical Informatics), of Biomedical Data Science, of Surgery and, by courtesy, of Epidemiology and Population Health

Bio

Dr. Hernandez-Boussard is an Associate Dean of Research and Professor of Medicine (Biomedical Informatics), Biomedical Data Science, Surgery, and Epidemiology & Population Health (by courtesy) at Stanford University. Drawing on expertise in biomedical informatics, health services research, and epidemiology, she develops, evaluates, and applies innovative methods to monitor, measure, and predict equitable healthcare outcomes. By leveraging real-world data, her team builds evidence to improve patient outcomes, streamline healthcare delivery, and inform health policy decisions. Dr. Hernandez-Boussard also focuses on mitigating bias and enhancing equity in artificial intelligence applications in healthcare settings. Through her research and evaluation of AI technology, she seeks to advance healthcare practices while ensuring that diverse populations receive equitable resources, care, and outcomes.

Publications

  • Guiding Principles to Address the Impact of Algorithm Bias on Racial and Ethnic Disparities in Health and Health Care. JAMA network open Chin, M. H., Afsar-Manesh, N., Bierman, A. S., Chang, C., Colon-Rodriguez, C. J., Dullabh, P., Duran, D. G., Fair, M., Hernandez-Boussard, T., Hightower, M., Jain, A., Jordan, W. B., Konya, S., Moore, R. H., Moore, T. T., Rodriguez, R., Shaheen, G., Snyder, L. P., Srinivasan, M., Umscheid, C. A., Ohno-Machado, L. 2023; 6 (12): e2345050

    Abstract

    Importance: Health care algorithms are used for diagnosis, treatment, prognosis, risk stratification, and allocation of resources. Bias in the development and use of algorithms can lead to worse outcomes for racial and ethnic minoritized groups and other historically marginalized populations such as individuals with lower income. Objective: To provide a conceptual framework and guiding principles for mitigating and preventing bias in health care algorithms to promote health and health care equity. Evidence Review: The Agency for Healthcare Research and Quality and the National Institute for Minority Health and Health Disparities convened a diverse panel of experts to review evidence, hear from stakeholders, and receive community feedback. Findings: The panel developed a conceptual framework to apply guiding principles across an algorithm's life cycle, centering health and health care equity for patients and communities as the goal, within the wider context of structural racism and discrimination. Multiple stakeholders can mitigate and prevent bias at each phase of the algorithm life cycle, including problem formulation (phase 1); data selection, assessment, and management (phase 2); algorithm development, training, and validation (phase 3); deployment and integration of algorithms in intended settings (phase 4); and algorithm monitoring, maintenance, updating, or deimplementation (phase 5). Five principles should guide these efforts: (1) promote health and health care equity during all phases of the health care algorithm life cycle; (2) ensure health care algorithms and their use are transparent and explainable; (3) authentically engage patients and communities during all phases of the health care algorithm life cycle and earn trustworthiness; (4) explicitly identify health care algorithmic fairness issues and trade-offs; and (5) establish accountability for equity and fairness in outcomes from health care algorithms. Conclusions and Relevance: Multiple stakeholders must partner to create systems, processes, regulations, incentives, standards, and policies to mitigate and prevent algorithmic bias. Reforms should implement guiding principles that support promotion of health and health care equity in all phases of the algorithm life cycle as well as transparency and explainability, authentic community engagement and ethical partnerships, explicit identification of fairness issues and trade-offs, and accountability for equity and fairness.

    DOI: 10.1001/jamanetworkopen.2023.45050

    PubMedID: 38100101

  • Perceptions of Data Set Experts on Important Characteristics of Health Data Sets Ready for Machine Learning: A Qualitative Study. JAMA network open Ng, M. Y., Youssef, A., Miner, A. S., Sarellano, D., Long, J., Larson, D. B., Hernandez-Boussard, T., Langlotz, C. P. 2023; 6 (12): e2345892

    Abstract

    The lack of data quality frameworks to guide the development of artificial intelligence (AI)-ready data sets limits their usefulness for machine learning (ML) research in health care and hinders the diagnostic excellence of developed clinical AI applications for patient care. To discern what constitutes high-quality and useful data sets for health and biomedical ML research purposes according to subject matter experts. This qualitative study interviewed data set experts, particularly those who are creators and ML researchers. Semistructured interviews were conducted in English and remotely through a secure video conferencing platform between August 23, 2022, and January 5, 2023. A total of 93 experts were invited to participate. Twenty experts were enrolled and interviewed. Using purposive sampling, experts were affiliated with a diverse representation of 16 health data sets/databases across organizational sectors. Content analysis was used to evaluate survey information and thematic analysis was used to analyze interview data. Data set experts' perceptions on what makes data sets AI ready. Participants included 20 data set experts (11 [55%] men; mean [SD] age, 42 [11] years), of whom all were health data set creators, and 18 of the 20 were also ML researchers. Themes (3 main and 11 subthemes) were identified and integrated into an AI-readiness framework to show their association within the health data ecosystem. Participants partially determined the AI readiness of data sets using priority appraisal elements of accuracy, completeness, consistency, and fitness. Ethical acquisition and societal impact emerged as appraisal considerations in that participant samples have not been described to date in prior data quality frameworks. Factors that drive creation of high-quality health data sets and mitigate risks associated with data reuse in ML research were also relevant to AI readiness. The state of data availability, data quality standards, documentation, team science, and incentivization were associated with elements of AI readiness and the overall perception of data set usefulness. In this qualitative study of data set experts, participants contributed to the development of a grounded framework for AI data set quality. Data set AI readiness required the concerted appraisal of many elements and the balancing of transparency and ethical reflection against pragmatic constraints. The movement toward more reliable, relevant, and ethical AI and ML applications for patient care will inevitably require strategic updates to data set creation practices.

    DOI: 10.1001/jamanetworkopen.2023.45892

    PubMedID: 38039004

  • Promoting Equity In Clinical Decision Making: Dismantling Race-Based Medicine. Health affairs (Project Hope) Hernandez-Boussard, T., Siddique, S. M., Bierman, A. S., Hightower, M., Burstin, H. 2023; 42 (10): 1369-1373

    Abstract

    As the use of artificial intelligence has spread rapidly throughout the US health care system, concerns have been raised about racial and ethnic biases built into the algorithms that often guide clinical decision making. Race-based medicine, which relies on algorithms that use race as a proxy for biological differences, has led to treatment patterns that are inappropriate, unjust, and harmful to minoritized racial and ethnic groups. These patterns have contributed to persistent disparities in health and health care. To reduce these disparities, we recommend a race-aware approach to clinical decision support that considers social and environmental factors such as structural racism and social determinants of health. Recent policy changes in medical specialty societies and innovations in algorithm development represent progress on the path to dismantling race-based medicine. Success will require continued commitment and sustained efforts among stakeholders in the health care, research, and technology sectors. Increasing the diversity of clinical trial populations, broadening the focus of precision medicine, improving education about the complex factors shaping health outcomes, and developing new guidelines and policies to enable culturally responsive care are important next steps.

    DOI: 10.1377/hlthaff.2023.00545

    PubMedID: 37782875

  • Contemporary attitudes and beliefs on coronary artery calcium from social media using artificial intelligence. NPJ digital medicine Somani, S., Balla, S., Peng, A. W., Dudum, R., Jain, S., Nasir, K., Maron, D. J., Hernandez-Boussard, T., Rodriguez, F. 2024; 7 (1): 83

    Abstract

    Coronary artery calcium (CAC) is a powerful tool to refine atherosclerotic cardiovascular disease (ASCVD) risk assessment. Despite its growing interest, contemporary public attitudes around CAC are not well-described in literature and have important implications for shared decision-making around cardiovascular prevention. We used an artificial intelligence (AI) pipeline consisting of a semi-supervised natural language processing model and unsupervised machine learning techniques to analyze 5,606 CAC-related discussions on Reddit. A total of 91 discussion topics were identified and were classified into 14 overarching thematic groups. These included the strong impact of CAC on therapeutic decision-making, ongoing non-evidence-based use of CAC testing, and the patient perceived downsides of CAC testing (e.g., radiation risk). Sentiment analysis also revealed that most discussions had a neutral (49.5%) or negative (48.4%) sentiment. The results of this study demonstrate the potential of an AI-based approach to analyze large, publicly available social media data to generate insights into public perceptions about CAC, which may help guide strategies to improve shared decision-making around ASCVD management and public health interventions.

    DOI: 10.1038/s41746-024-01077-w

    PubMedID: 38555387

    PubMedCentralID: PMC10981728
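
The AI-readiness appraisal elements named in the Ng et al. data set study above (accuracy, completeness, consistency, and fitness, along with ethical acquisition and societal impact) can be pictured as a simple checklist. The Python sketch below is illustrative only: the field names, the added documentation item, and the scoring rule are assumptions for exposition, not the instrument or framework developed in the study.

    # Illustrative only: a minimal checklist inspired by the appraisal elements named in
    # Ng et al. (accuracy, completeness, consistency, fitness, ethical acquisition,
    # societal impact). Field names and scoring are assumptions, not the study's framework.
    from dataclasses import dataclass, fields

    @dataclass
    class AIReadinessAppraisal:
        accuracy: bool = False            # values match a trusted source or chart review
        completeness: bool = False        # key variables populated for the target cohort
        consistency: bool = False         # uniform coding/units across sites and time
        fitness: bool = False             # variables actually support the intended ML task
        ethical_acquisition: bool = False # consent/governance for reuse is documented
        societal_impact: bool = False     # foreseeable harms and benefits are assessed
        documentation: bool = False       # provenance, data dictionary, known limitations

        def score(self) -> float:
            """Fraction of appraisal elements satisfied (0.0-1.0)."""
            checks = [getattr(self, f.name) for f in fields(self)]
            return sum(checks) / len(checks)

    appraisal = AIReadinessAppraisal(accuracy=True, completeness=True, documentation=True)
    print(f"AI-readiness checklist score: {appraisal.score():.2f}")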
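
The analysis pipeline described in the Somani et al. coronary artery calcium study above combined a semi-supervised natural language processing model, unsupervised machine learning, and sentiment analysis. The Python sketch below illustrates that general pattern with off-the-shelf substitutes (TF-IDF plus k-means clustering from scikit-learn, and the VADER lexicon from the vaderSentiment package); the example posts are invented, and the model choices, cluster count, and sentiment thresholds are assumptions rather than the authors' methods.

    # Illustrative sketch only: the published pipeline used a semi-supervised NLP model and
    # unsupervised techniques; this substitutes TF-IDF + k-means clustering and a VADER
    # sentiment lexicon, so it is not the authors' method.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans
    from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

    # Invented example posts standing in for CAC-related social media discussions.
    posts = [
        "My doctor ordered a coronary artery calcium scan before starting a statin.",
        "Worried about the radiation from a CAC scan, is it worth it?",
        "CAC score of zero, so we decided to hold off on medication for now.",
    ]

    # Cluster posts into coarse discussion topics.
    vectorizer = TfidfVectorizer(stop_words="english")
    X = vectorizer.fit_transform(posts)
    topics = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

    # Score the sentiment of each post with a general-purpose lexicon.
    analyzer = SentimentIntensityAnalyzer()
    for post, topic in zip(posts, topics):
        compound = analyzer.polarity_scores(post)["compound"]
        label = "positive" if compound > 0.05 else "negative" if compound < -0.05 else "neutral"
        print(f"topic {topic} | {label:8s} | {post}")

In the published study, the identified topics were further grouped into 14 overarching thematic groups and sentiment was summarized across 5,606 discussions; this sketch only shows the per-post mechanics.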

Academic Appointments

Associate Professor, Medicine - Biomedical Informatics Research 

Associate Professor, Biomedical Data Science

Associate Professor, Surgery - General Surgery

Member, Stanford Cancer Institute

Professional Education

M.S., Stanford University, Health Services Research (2013)

Ph.D., University Claude Bernard, Lyon 1, Computational Biology (1999)

M.P.H., Yale University, Epidemiology (1993)

B.A., University of California, Irvine, Psychology (1991)

B.S., University of California, Irvine, Biology (1991)