AI’s future in medicine the focus of Stanford Med LIVE event

Leaders of Stanford Medicine discuss artificial intelligence in health and medicine; its usefulness in research, education and patient care; and how to responsibly integrate the technology.

- By Hanae Armitage

Nigam Shah, Natalie Pageler, David Magnus and Sylvia Plevritis, with panel moderator Michael Pfeffer, discussed ways that artificial intelligence can improve patient care and lighten providers' workload.
Dorin Greenwood

Artificial intelligence-powered health care, generative models in medical research and the ethics of broad AI integration were key topics at the March 18 Stanford Med LIVE event featuring experts from across Stanford Medicine.

Panelists at the event explored what AI is; why it’s poised to change the future; and how it can support practices in research, education and patient care. The event was a precursor to the first RAISE Health Symposium, coming in May, and set the stage for further exploration of how the current wave of excitement — fueled by advancements in generative AI technology and access to massive amounts of data — can be applied to health care and medicine.

“Now, with an explosion in new AI capabilities, we are beginning to see the full promise of this technology — as a tool with the potential to transform patient outcomes, advance biomedical education and accelerate research,” said Lloyd Minor, MD, dean of the Stanford School of Medicine and vice president for medical affairs at Stanford University.

Minor also addressed the obligation institutions like Stanford Medicine face to deploy AI tools responsibly. In partnership with the Stanford Institute for Human-Centered Artificial Intelligence, Stanford Medicine launched the Responsible AI for Safe and Equitable Health Initiative — RAISE Health — in June 2023 to ensure AI is developed, used and evaluated in medicine following best practices and the highest ethical standards.

In recent years, Stanford Medicine has begun tapping into AI’s potential applications. “At Stanford Health Care, we already have more than 30 different technology applications that leverage AI, and we will see many more of these tools coming online in the not-too-distant future,” said David Entwistle, president and CEO at Stanford Health Care. “We’re entering an exciting era of AI innovation in health and medicine, and Stanford Medicine is uniquely poised to lead.”

But, as another of Stanford Medicine’s key leaders pointed out, it will be critical that AI models represent all populations fairly, equitably and without bias. “To date, AI systems in medicine have been primarily trained on data from adults, as there are special privacy considerations for the use and availability of pediatric patient data,” said Paul King, president and CEO of Stanford Medicine Children’s Health. “We are actively solving this challenge at Stanford Medicine so that even our youngest patients can benefit from the same technology advances, while maintaining the necessary robust protections.”

The panel discussion, moderated by Michael Pfeffer, MD, chief information officer for Stanford Health Care and the School of Medicine, featured four speakers from Stanford Medicine:

  • David Magnus, PhD, professor of medicine, biomedical ethics and pediatrics, and the Thomas A. Raffin Professor in Medicine and Biomedical Ethics
  • Natalie Pageler, MD, chief medical information officer at Stanford Medicine Children’s Health and clinical professor of pediatrics and medicine
  • Sylvia Plevritis, PhD, chair of biomedical data science and professor of radiology
  • Nigam Shah, PhD, chief data scientist at Stanford Health Care, professor of medicine and associate dean for research

AI is having a moment

Simply put, Shah told the audience, AI is the application of data by an algorithm that performs a task on behalf of, or in assistance to, a human being. The use of AI has exploded as generative AI models, such as ChatGPT — which can assimilate existing data and information and apply it in a human-like fashion — have grabbed the world’s attention.

The panelists discussed how to harness that promise, channeling the broader enthusiasm into something mission-driven, impact-focused and ethical. At Stanford Medicine, that implementation is surfacing in a variety of ways, from helping kids manage Type 1 diabetes, to solving challenges in data scarcity, to creating new drugs and therapeutics with higher efficiency and lower toxicity. Outside of research, Pfeffer also pointed to two uses that are poised to ease clerical work for clinicians: ambient listening tools that generate clinical notes for doctors and large language models that draft responses to patient messages.

As panelists shared sentiments of anticipation and excitement, all emphasized human-centric, responsible integration of AI. “There’s so much more to providing care than just what AI can provide,” Pageler said. “It’s important that we all learn to use it, but not to be worried about being replaced.”

Deploying AI in health care

The panelists acknowledged that AI’s success in health and medicine will largely depend on the thoughtfulness and fairness with which algorithms are folded into practice.

Algorithms are not inherently neutral, Magnus said. If the data is biased, the algorithm will be too. “AI is often just a mirror. Data reflects social determinants of health; it can reflect biases in physician behavior,” he said. “That can be a problem because the models that learn from that data can either reify those biases, or we can turn them around to combat the problems that already exist.”

The AI experts say it’s crucial to look at the downstream effects of adopting AI into something as complex as a health care system. That means seeking guidance from like-minded entities such as the Coalition for Health AI and using tools such as the FURM (fair, useful, reliable model) assessment, a system spearheaded by Shah and others to determine whether AI tools deliver fair, useful and reliable guidance for care. “The point is to look at the ripple effects of using a model,” Shah said, “to think beyond the model and look at the workflow impact on real people, like workforce, patients, IT staff or nursing staff.”

These are big challenges for those aiming to get AI right. Nonetheless, the Stanford Medicine panelists shared an optimism about the future they are helping craft — largely because of where they get to do it. “Not only do we have a fantastic medical center, but we have an entire university that’s within walking distance, and we connect every day with our colleagues from medicine, engineering, humanities and other specialties,” Plevritis said. “I feel like we’re on the precipice of new knowledge, and we’re truly on the best campus to see it through.”

For more news about responsible AI in health and medicine, sign up for the RAISE Health newsletter.

Register for the RAISE Health Symposium on May 14.

About Stanford Medicine

Stanford Medicine is an integrated academic health system comprising the Stanford School of Medicine and adult and pediatric health care delivery systems. Together, they harness the full potential of biomedicine through collaborative research, education and clinical care for patients. For more information, please visit
