RAISE Health Seed Grants

RAISE Health established a new seed grant program in 2024 to support research and education program proposals designed to advance responsible AI innovation in medicine. The program awards up to $100,000 to each of five funded projects.

Advances in artificial intelligence technology offer unprecedented opportunities to improve health and medicine, from accelerating biomedical research to strengthening care delivery and patient outcomes. However, in a field as sensitive as medicine, it is vital that the pursuit of these aims is approached responsibly and with a commitment to maximizing the benefits for all. 

“These seed grants reflect our deep commitment to projects that align with the principles of RAISE Health — responsible AI for safe and equitable health. By fostering research and initiatives that emphasize transparency, fairness and accountability, we can harness the transformative power of AI to improve patient outcomes, accelerate new discoveries and enhance the quality of care for all communities.”

Lloyd Minor, MD, Dean of the Stanford School of Medicine and Vice President for Medical Affairs at Stanford University



2024 Seed Grant Recipients

Heart disease is the leading cause of death in the United States and worldwide, but many people do not know they are at high risk. Doctors can look for early signs of heart disease by checking for calcium buildup in the heart’s arteries. A special type of scan, called a gated CT scan or coronary artery calcium scan, can detect this buildup and help doctors decide whether a patient needs medicine to prevent a heart attack or stroke. But these scans are not commonly done, and insurance often does not cover them. However, millions of people undergo other kinds of chest CT scans for other health reasons. These scans can also show calcium in the heart, but the finding is often not reported or acted upon, so patients frequently do not learn about their risk. We are working to change this and to provide preventive treatment based on this unused but important information.

We have created an artificial intelligence tool that can find this calcium on CT scans that were performed for other health reasons. This tool has been cleared by the FDA, and we want to use it to help more people get appropriate preventive cardiology care. The first goal of this study is to test the algorithm in a new patient population, specifically patients being screened for lung cancer, who are at high risk for heart disease because of their smoking history. The second goal is to understand the ethical and legal implications of a clinical program to screen for and treat patients identified as high risk by our AI tool. The final goal is to build a clinical program that provides high-risk patients identified by AI with appropriate preventive cardiology care.
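As a purely illustrative sketch of how such an AI-augmented workflow could route patients, the toy code below uses made-up field names and a made-up referral threshold; it is not the FDA-cleared tool or the clinical criteria this team will use.

    # Hypothetical sketch of routing opportunistic calcium findings to preventive
    # cardiology. Fields, names, and the threshold are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class CalciumFinding:
        patient_id: str
        scan_indication: str   # e.g., "lung cancer screening"
        calcium_score: float   # AI-estimated coronary calcium burden

    def patients_to_refer(findings: list[CalciumFinding], threshold: float = 100.0) -> list[str]:
        """Return patients whose estimated calcium burden exceeds a made-up
        threshold, so they can be offered preventive cardiology care."""
        return [f.patient_id for f in findings if f.calcium_score >= threshold]

    findings = [CalciumFinding("p1", "lung cancer screening", 310.0),
                CalciumFinding("p2", "lung cancer screening", 0.0)]
    print(patients_to_refer(findings))  # ['p1']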

Over 1,000 health AI algorithms are FDA-approved or -cleared, but only a handful have been successfully deployed clinically to help patients. The goal of this project is to responsibly translate novel AI tools and research into sustainable, real-world patient impact through an AI-augmented clinical program.

Name | Role | School | Department
Sneha S. Jain | Main PI | School of Medicine | Cardiovascular Medicine
Akshay Chaudhari | Co-PI | School of Medicine | Radiology
David Maron | Co-PI | School of Medicine | Medicine - Stanford Prevention Research Center
Fatima Rodriguez | Co-PI | School of Medicine | Cardiovascular Medicine

Comorbidity indices are widely used by clinicians and researchers to assess how multiple health conditions affect treatment response and patient outcomes. However, the most commonly used index, developed in 1987, fails to account for modern health challenges, including mental health disorders, substance abuse, and social determinants of health. Additionally, its applicability across diverse populations remains uncertain, potentially contributing to health disparities.

This research project aims to develop a more comprehensive and equitable comorbidity index using artificial intelligence and diverse patient records. By integrating socioeconomic factors alongside physical and mental health conditions, the new index will enable more personalized health assessments. The team will create user-friendly tools for healthcare providers and researchers to facilitate data-driven decision-making at the point of care and to support inclusive health research.
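For readers unfamiliar with the underlying mechanics, a comorbidity index is essentially a weighted sum over the conditions recorded for a patient. The toy sketch below uses invented weights (not the 1987 index or the index this project will build) simply to show the computation and the kinds of factors older indices omit.

    # Toy comorbidity score: a weighted sum of recorded conditions.
    # Weights are invented for illustration and are not any published index.
    TOY_WEIGHTS = {
        "diabetes": 1,
        "chronic_kidney_disease": 2,
        "heart_failure": 2,
        "metastatic_cancer": 6,
        "depression": 1,           # mental health, missing from older indices
        "housing_instability": 1,  # social determinant, missing from older indices
    }

    def comorbidity_score(conditions: set[str]) -> int:
        """Sum the weights of the conditions present in a patient's record."""
        return sum(TOY_WEIGHTS.get(c, 0) for c in conditions)

    print(comorbidity_score({"diabetes", "depression", "housing_instability"}))  # 3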

By modernizing this fundamental clinical tool, the project addresses a critical gap in healthcare assessment, ultimately promoting more equitable health outcomes across diverse patient populations.

Name | Role | School | Department
Tina Hernandez-Boussard | Main PI | School of Medicine | Biomedical Informatics
Lu Tian | Main PI | School of Medicine | Biomedical Informatics
Ron Li | Co-PI | School of Medicine | Hospital Medicine

Advancements in AI have the potential to enhance primary care by streamlining medical tasks and improving diagnostic accuracy. However, these benefits cannot be fully realized if patients, particularly those from marginalized communities of color, distrust AI-based healthcare tools. People with limited health literacy and/or English proficiency often exhibit lower trust in both AI and healthcare systems. Failing to consider how people from diverse cultural and linguistic backgrounds perceive the benefits and harms of AI in healthcare risks further eroding trust in healthcare providers (Lee et al., 2021).

This project addresses the urgent need to understand how people from marginalized, multilingual communities perceive and respond to the introduction of AI tools in primary care. We will first conduct focus groups and interviews with 120 Black, Latino/a/e, and Asian American participants to explore the barriers to trusting AI in healthcare, collaborating closely with community organizations to ensure culturally relevant insights. Second, we will develop and test a multilingual psychometric scale to measure trust in AI in healthcare, using responses from three large groups of Spanish- and Mandarin-speaking individuals (1,300 people per group).

We will leverage our team’s expertise in human-centered AI, primary care among marginalized communities, and culturally responsive interventions to develop empirically based pathways for building trust in AI in primary care, so that Americans of all backgrounds can make informed decisions about their health, including about new AI tools that can enhance providers’ ability to deliver timely, high-quality care. This project will help healthcare providers build patient trust in AI and support people from marginalized communities in navigating these new technologies.

Name | Role | School | Department
Jeff Hancock | Main PI | School of Humanities and Sciences | Communication
Lee Sanders | Co-PI | School of Medicine | General Pediatrics

Ensuring that artificial intelligence (AI) tools in healthcare operate safely and effectively requires robust evaluation within realistic clinical contexts. Traditional evaluation methods often rely on standardized tests that fail to capture the full complexity of patient care, while manually curated benchmark datasets can be both time-consuming and limiting. CuraBench introduces a configurable benchmark generation system designed to create customized synthetic datasets tailored to specific clinical needs, patient populations, and use cases.

The system's innovative approach enables the generation of diverse evaluation scenarios—from assessing how AI systems interpret longitudinal patient histories and assist in diagnosis, to evaluating their ability to accurately summarize medical records. By leveraging real-world healthcare data, CuraBench produces realistic yet synthetic scenarios that can be configured to match the requirements of various medical settings, specialties, and demographics.
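To make the idea of a configurable benchmark concrete, the sketch below shows what such a configuration might look like; the field names and options are hypothetical illustrations, not CuraBench's actual interface.

    # Hypothetical benchmark configuration; fields and values are illustrative
    # assumptions, not CuraBench's actual API.
    from dataclasses import dataclass

    @dataclass
    class BenchmarkConfig:
        task: str                  # e.g., "diagnosis_support" or "record_summarization"
        specialty: str             # clinical setting the scenarios should reflect
        population: dict           # demographic mix for the synthetic patients
        n_cases: int = 500         # number of synthetic evaluation scenarios
        longitudinal: bool = True  # include multi-visit patient histories

    config = BenchmarkConfig(
        task="record_summarization",
        specialty="pediatrics",
        population={"age_range": (0, 18), "languages": ["en", "es"]},
    )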

By streamlining the creation of comprehensive benchmark datasets, CuraBench makes it both easier and more cost-effective to thoroughly evaluate AI systems and verify the presumed benefits of using AI tools in healthcare. This work represents a significant step toward responsible AI deployment, ensuring that each model is rigorously tested in environments that mirror its intended clinical use.

Name | Role | School | Department
Nigam Shah | Main PI | School of Medicine | Biomedical Informatics
Sanmi Koyejo | Co-PI | School of Engineering | Computer Science
Keith Morse | Co-PI | School of Medicine | Pediatrics

Without prompt action, the lack of proper AI training among clinicians will put patient safety at risk and deepen existing health disparities. To address this critical educational gap, we will design and implement a modular curriculum that empowers clinicians to use AI effectively and responsibly in patient care. The Improving Medical Practice AI Competency Training (IMPACT) curriculum will be the first program to provide comprehensive, interdisciplinary, competency-based AI training across all stages of clinical education, from preclinical students to practicing clinicians. The curriculum focuses on core AI competencies identified in preliminary research, including AI literacy, ethical and legal awareness, patient communication, and model evaluation for clinical relevance and safety.

Name | Role | School | Department
Curtis Langlotz | Main PI | School of Medicine | Radiology
Jonathan Chen | Co-PI | School of Medicine | Biomedical Informatics
Malathi Srinivasan | Co-PI | School of Medicine | Primary Care and Population Health
Eric Strong | Co-PI | School of Medicine | Hospital Medicine
Alaa Youssef | Medical Education Lead Research Scholar | School of Medicine | Radiology


About the Call for Proposals

The 2024 call focused in particular on research proposals that advance evaluation methods for fair algorithms in healthcare or investigate the ethical or legal implications of AI in healthcare. It also invited proposals for education programs that help patients, care providers, and researchers navigate AI advances.

These projects should aim to build the trustworthiness of AI systems, facilitate the broader acceptance and effective integration of responsible AI technologies in medicine, and drive improvements in population health by ensuring that all communities benefit from AI innovations.

Applications are closed. If you have questions, please contact us at raisehealth-grants@lists.stanford.edu.