Principal Investigator
Bio
Publications
-
Perspectives on validation of clinical predictive algorithms.
npj Digital Medicine
2023; 6 (1): 86
DOI: 10.1038/s41746-023-00832-9
PubMed ID: 37149704
-
A deep-learning algorithm to classify skin lesions from mpox virus infection.
Nature Medicine
2023
Abstract
Undetected infection and delayed isolation of infected individuals are key factors driving the monkeypox virus (now termed mpox virus or MPXV) outbreak. To enable earlier detection of MPXV infection, we developed an image-based deep convolutional neural network (named MPXV-CNN) for the identification of the characteristic skin lesions caused by MPXV. We assembled a dataset of 139,198 skin lesion images, split into training/validation and testing cohorts, comprising non-MPXV images (n=138,522) from eight dermatological repositories and MPXV images (n=676) from the scientific literature, news articles, social media and a prospective cohort of the Stanford University Medical Center (n=63 images from 12 patients, all male). In the validation and testing cohorts, the sensitivity of the MPXV-CNN was 0.83 and 0.91, the specificity was 0.965 and 0.898 and the area under the curve was 0.967 and 0.966, respectively. In the prospective cohort, the sensitivity was 0.89. The classification performance of the MPXV-CNN was robust across various skin tones and body regions. To facilitate the usage of the algorithm, we developed a web-based app by which the MPXV-CNN can be accessed for patient guidance. The capability of the MPXV-CNN for identifying MPXV lesions has the potential to aid in MPXV outbreak mitigation.
DOI: 10.1038/s41591-023-02225-7
PubMed ID: 36864252
-
The AI life cycle: a holistic approach to creating ethical AI for health decisions.
Nature Medicine
2022
DOI: 10.1038/s41591-022-01993-y
PubMed ID: 36163298
-
Peeking into a black box, the fairness and generalizability of a MIMIC-III benchmarking model.
Scientific Data
2022; 9 (1): 24
Abstract
As artificial intelligence (AI) makes continuous progress to improve quality of care for some patients by leveraging ever increasing amounts of digital health data, others are left behind. Empirical evaluation studies are required to keep biased AI models from reinforcing systemic health disparities faced by minority populations through dangerous feedback loops. The aim of this study is to raise broad awareness of the pervasive challenges around bias and fairness in risk prediction models. We performed a case study on a MIMIC-trained benchmarking model using a broadly applicable fairness and generalizability assessment framework. While open-science benchmarks are crucial to overcome many study limitations today, this case study revealed a strong class imbalance problem as well as fairness concerns for Black and publicly insured ICU patients. Therefore, we advocate for the widespread use of comprehensive fairness and performance assessment frameworks to effectively monitor and validate benchmark pipelines built on open data resources.
DOI: 10.1038/s41597-021-01110-7
Web of Science ID: 000746595100001
PubMed ID: 35075160
A complete list of publications is available on Google Scholar and PubMed.
Academic Appointments
Associate Professor, Medicine - Biomedical Informatics Research
Associate Professor, Biomedical Data Science
Associate Professor, Surgery - General Surgery
Member, Stanford Cancer Institute
Professional Education
M.S., Stanford University, Health Services Research (2013)
Ph.D., Université Claude Bernard Lyon 1, Computational Biology (1999)
M.P.H., Yale University, Epidemiology (1993)
B.A., University of California, Irvine, Psychology (1991)
B.S., University of California, Irvine, Biology (1991)