Resource Hub

About

On this page you will find Stanford’s latest AI research in health and medicine, including access to datasets, tools, research papers, and the institutes, centers, and labs leading this work.

Featured Items


AI Development Resources

Stanford Health Care and Stanford School of Medicine Secure GPT (beta)

Stanford Health Care and Stanford School of Medicine Secure GPT is powered by GPT-4.0 and provides a safe, secure environment in which you can ask questions, summarize text and files, and get help solving a range of complex problems.

Stanford Medicine Children's Health AskDigi

Stanford Medicine Children's Health AskDigi is a tool designed for SMCH employees to safely process PDF documents, automate the generation and summarization of reports, standardize communications, and extract vital data, all while strictly adhering to our PHI safety standards.

Stanford Text De-identification Algorithm

The Stanford de-identifier was trained on a variety of radiology and biomedical documents, with the goal of automating the de-identification process while achieving accuracy sufficient for production use.
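To illustrate what the de-identification task looks like, here is a minimal regex-based sketch. This is a hypothetical illustration only; the Stanford de-identifier uses a trained model, not hand-written rules, and the patterns below are invented assumptions covering just a few PHI categories:

```python
import re

# Hypothetical patterns for a few common PHI categories. A production
# system such as the Stanford de-identifier relies on a trained NER
# model rather than rules like these.
PHI_PATTERNS = {
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "MRN": re.compile(r"\bMRN[:\s]*\d{6,10}\b"),
    "PHONE": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
}

def deidentify(text: str) -> str:
    """Replace matched PHI spans with category placeholders."""
    for label, pattern in PHI_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Patient seen on 03/14/2023, MRN: 1234567, callback 650-555-0199."
print(deidentify(note))
# → Patient seen on [DATE], [MRN], callback [PHONE].
```

A rule-based pass like this misses context-dependent identifiers (names, institutions), which is why model-based approaches are used in practice.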

AI-Ready Clinical Datasets Shared by AIMI

Stanford AIMI shares annotated data to foster transparent and reproducible collaborative research to advance AI in medicine. Datasets are available to the public to view and use without charge for non-commercial research purposes.

EHRSHOT: An EHR Benchmark for Few-Shot Evaluation of Foundation Models

EHRSHOT contains de-identified structured data from the electronic health records (EHRs) of 6,739 patients from Stanford Medicine. Unlike MIMIC-III/IV and other popular EHR datasets, EHRSHOT is longitudinal and not restricted to ICU/ED patients.

FactEHR

FactEHR is a dataset of full-document fact decompositions for 2,168 clinical notes spanning four note types from three hospital systems.

HARMONI

HARMONI is a three-dimensional (3D) computer vision and audio processing method for analyzing caregiver-child behavior and interaction from observational videos. HARMONI operates at subsecond resolution, estimating 3D mesh representations and spatial interactions of humans, and adapts to challenging natural environments using an environment-targeted synthetic data generation module. Deployed on 500 hours from the SEEDLingS dataset, HARMONI generates detailed quantitative measurements of 3D human behavior previously unattainable through manual efforts or 2D methods.

INSPECT: A Multimodal Dataset for Pulmonary Embolism Diagnosis and Prognosis

Synthesizing information from multiple data sources is central to modern medical practice, yet current applications of artificial intelligence in medicine often focus on single-modality data because publicly available multimodal medical datasets are scarce. INSPECT addresses this gap with a multimodal dataset for pulmonary embolism diagnosis and prognosis.

Stanford AIMI Dataset Index

Stanford AIMI has launched a community-driven index of health AI datasets for machine learning in healthcare, part of a vision to catalyze the sharing of well-curated, de-identified clinical datasets.

ACCEPT-AI

ACCEPT-AI is a framework of recommendations for the safe inclusion of pediatric data in artificial intelligence and machine learning (AI/ML) research. It is built on fundamental ethical principles of pediatric and AI research and incorporates age, consent, assent, communication, equity, data protection, and technological considerations. ACCEPT-AI is designed to guide researchers, clinicians, regulators, and policymakers, and can be used as an independent tool or alongside existing AI/ML guidelines.

Clinical Excellence Research Center's Computer Vision Initiative

The initiative applies cutting-edge computer vision to boost the reliability of clinical care. Sensors passively capture data from the clinical environment, and machine-learning algorithms are developed to automatically detect patient and staff activities across settings ranging from the hospital (ICU, OR, inpatient units) to outpatient care (clinics, home-based care).

A framework for evaluating Fair, Useful, and Reliable AI Models in healthcare systems (FURM)

The Data Science team at Stanford Health Care has developed a mechanism for identifying fair, useful, and reliable AI models (FURM). A FURM assessment combines an ethical review to identify potential value mismatches, simulations to estimate usefulness, financial projections to assess sustainability, and analyses to determine IT feasibility, design a deployment strategy, and recommend a prospective monitoring and evaluation plan.


The FURM assessment uses APLUS, a reusable framework for quantitatively assessing, via simulation, the utility gained from integrating a model into a clinical workflow.
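As a purely illustrative sketch of this kind of simulation-based utility estimate (this is not APLUS's actual model; all parameters, costs, and benefits below are invented assumptions), a Monte Carlo run over a simple alert workflow might look like:

```python
import random

random.seed(0)

# Invented parameters for illustration: an alert model with assumed
# sensitivity/specificity, and assumed per-case utility of acting on
# true vs. false alerts within a clinical workflow.
PREVALENCE = 0.05    # fraction of patients who are true cases
SENSITIVITY = 0.80   # P(alert | case)
SPECIFICITY = 0.95   # P(no alert | non-case)
BENEFIT_TP = 10.0    # utility of acting on a true alert
COST_FP = -1.0       # utility of acting on a false alert

def simulate_utility(n_patients: int) -> float:
    """Monte Carlo estimate of total utility from deploying the alert."""
    total = 0.0
    for _ in range(n_patients):
        is_case = random.random() < PREVALENCE
        alert_rate = SENSITIVITY if is_case else 1 - SPECIFICITY
        if random.random() < alert_rate:
            total += BENEFIT_TP if is_case else COST_FP
    return total

print(simulate_utility(10_000))
```

Sweeping the invented parameters (e.g., prevalence or false-positive cost) shows how a workflow simulation can flip a model from net-useful to net-harmful, which is the kind of question a utility assessment is meant to answer.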

Our Partners