AI experts talk about its potential promise, pitfalls at Stanford Medicine conference

Leaders from health care, industry and government convened virtually to find ways to ensure artificial intelligence benefits caregivers as well as patients.

- By Alan Toth

Lloyd Minor; Fei-Fei Li; and Curtis Langlotz, director of the Center for Artificial Intelligence in Medicine and Imaging, spoke at Stanford Medicine's 2023 AI+Health Conference.

Artificial intelligence can help physicians avoid burnout, translate medical information for patients and minimize bias, as long as it is thoughtfully employed, according to experts who spoke at a conference on AI and health at Stanford Medicine.

“We need responsible humans to pair with AI for a system that’s truly responsible,” said Michael Pfeffer, MD, chief information officer of Stanford Health Care and associate dean of the Stanford School of Medicine. 

Leaders in medicine, academia, industry and government convened virtually Dec. 6 and 7 for the 2023 AI+Health Conference to discuss the impact of artificial intelligence on clinical practice and the health care industry.

Development and implementation

Lloyd Minor, MD, dean of the Stanford School of Medicine and vice president for medical affairs of Stanford University, moderated a panel on responsible development and implementation of AI.

Minor began the conversation by asking the panel how they define responsible AI when it comes to health care. Valerie Delva, worldwide head of artificial intelligence at Amazon Web Services, said that there can be no static definition of responsibility in a rapidly advancing field like AI, but that the benchmarks for responsible AI include prioritizing fairness, minimizing bias, protecting data privacy and security, and ensuring transparency regarding use and risks. Pfeffer added that responsible use of AI depended on monitoring and updating already operational systems. Health care, like AI, is always advancing, he said, and clinicians must know if the algorithms they use are up to date.

Minor asked the panel if there were any pre-existing frameworks for creating regulations for AI. Troy Tazbaz, director of digital health at the U.S. Food and Drug Administration, made the somewhat sobering point that the legislation that governs how the FDA regulates medical devices was written in 1976 and is not applicable to AI software products. New regulation, he said, will need to be very flexible if it is to remain relevant.

Minor noted that electronic medical recordkeeping has come to burden clinicians with onerous, box-checking tasks. He asked the panel how we can ensure that AI will not cause similar problems. Phil Lindemann, vice president of data and analytics at Epic, which provides health systems technology, acknowledged that the burden created by medical recordkeeping software was never intended and frustrates developers as well. He said it is important that integrating AI with medical recordkeeping not add to clinicians' workload by requiring them to train AI assistance systems.

“The most responsible thing we can be doing for physicians with AI is take all the mundane, repetitive tasks that they don’t want to do and figure out how we can automate them,” Lindemann said.

According to Pfeffer, potential AI developments that address burnout fall into two buckets: automation and augmentation. He said that some automation solutions are already in use at Stanford Medicine, including ambient voice technology, which records the conversation between physician and patient and uses it to generate correctly coded clinical notes that physicians can review and edit. Augmentation solutions, such as evaluating symptoms, diagnostics and a patient's medical history to make treatment recommendations, are likely not imminent, he said.

Lindemann argued that some solutions can assist patients. Time and language barriers often prevent patients from asking every question they have regarding treatment plans, he noted. Lindemann said generative AI can be a universal translator for patients of varying education levels who speak a variety of languages.

“We need to think about these systems because patients are just going to copy their radiology report and put it into Google and ask, ‘What does this mean?’ We need to figure out a way to deploy these things in a safe container,” Lindemann said.

Safety and equity

At a panel on the Responsible AI for Safe and Equitable Health (RAISE Health) initiative, Minor and Fei-Fei Li, the Sequoia Capital Professor and a professor of computer science, discussed their motivations for launching RAISE Health.

“AI is such a powerful tool that creates opportunity but also anxiety,” Li said. “There’s a profound benevolence that AI can bring to health care if we use it responsibly, to augment and give agency back to humans.”

Minor said he was excited about the potential of responsibly implemented AI to address inequities by improving access, such as helping general practitioners better determine whether rural or low-income patients need to see specialists. Minor said that Stanford Medicine had a responsibility to train the next generation of clinicians on AI tools.

“We in health care are usually the last to adopt technology,” he said. “All too frequently, delays are not caused by safety concerns but by the complicated and overly complex nature of our health care delivery system. We need to break down those barriers so that AI can be responsibly deployed in a timely fashion.”

About Stanford Medicine

Stanford Medicine is an integrated academic health system comprising the Stanford School of Medicine and adult and pediatric health care delivery systems. Together, they harness the full potential of biomedicine through collaborative research, education and clinical care for patients. For more information, please visit med.stanford.edu.

