Artificial intelligence experts discuss how to integrate trustworthy AI into health care, why multidisciplinary collaboration is crucial, and where generative AI holds potential for research.
Meet our first faculty champion for AI medical education, hear from Google’s chief health officer, and learn about the promise of AI built on multimodal data.
Faculty predictions for AI in the year ahead, a new framework for evaluating large language models' diagnostic abilities, and an algorithm that helps predict subtypes of Type 2 diabetes.
A dataset optimized for AI-assisted diabetes research, a poll on what older adults think of AI-generated health information, a new podcast hosted by Maya Adam, “red teaming” explained, and more.
Assessment in medical education has evolved through a sequence of eras, each centering on distinct views and values. These eras include measurement (e.g., knowledge exams, objective structured clinical examinations), then judgments (e.g., workplace-based ...
Stanford Medicine doctors and researchers are modifying existing chatbots to perform well in a frontier of AI-enhanced medicine: the doctor-patient interaction.
Stanford Medicine integrates AI-powered listening technology that takes notes for health care providers, allowing them to spend more time with patients and less time on administrative tasks.
This qualitative study examines issues identified by representatives of different health sector organizations regarding the development of artificial intelligence and the sharing of health data.
Stanford Medicine researchers are helping patients use AI image-generation software as part of a unique study that aims to quantify how creating art aids patients in their recovery.
While cardiac sphericity was the focus of the Stanford Medicine-led research, its true significance, researchers say, lies in the potential of data science to expand the reach of biomedical science.
This Special Communication presents a conceptual framework and guiding principles for mitigating and preventing bias in health care algorithms to promote health and health care equity.
Leaders from health care, industry and government convened virtually to find ways to ensure artificial intelligence improves care for caregivers as well as patients.
As the use of artificial intelligence has spread rapidly throughout the US health care system, concerns have been raised about racial and ethnic biases built into the algorithms that often guide clinical decision making.
Health care providers must reckon with inherent race-based biases in medicine, which can reinforce false stereotypes in algorithms and lead to improper treatment recommendations or late diagnoses.
Decision-support tools for helping physicians follow clinical guidelines are increasingly using artificial intelligence, highlighting the need to remove bias from underlying algorithms.
A Stanford health care AI scholar discusses the implications of AI's ability to predict patients' race or ethnicity based solely on medical images such as X-rays and ultrasounds.
The ethical impact of AI algorithms in health care should be assessed at each phase, from data creation to model deployment, so that their use narrows rather than widens inequalities.
New artificial intelligence tools have the potential to revolutionize health care. But Stanford researchers argue that disparities could worsen without intervention now.
Stanford authors introduce HARMONI, a three-dimensional (3D) computer vision and audio processing method for analyzing caregiver-child behavior and interaction from observational videos. HARMONI operates at subsecond resolution, estimating 3D mesh representations and spatial interactions of humans, and adapts to challenging natural environments using an environment-targeted synthetic data generation module.
To realize the benefits of AI in detecting diseases such as skin cancer, doctors need to trust the decisions rendered by AI. That requires a better understanding of its internal reasoning.
Stanford Medicine researchers devise a new artificial intelligence model, SyntheMol, which creates recipes chemists can use to synthesize candidate drugs in the lab.
Scholars develop a new model to surface high-risk messages and dramatically reduce the time it takes to reach a patient in crisis, from 10 hours to 10 minutes.