Thousands of artificial intelligence experts and enthusiasts gathered June 2 through 6 for Stanford Health AI Week, presented by RAISE Health, to explore a range of topics focused on AI in health and medicine. Six back-to-back events were held in person and online: the RAISE Health Symposium, the Artificial Intelligence in Medicine and Imaging Symposium, the AIMI Pediatrics Symposium, the AI in Medical Education Symposium, the Coalition for Health AI Leadership Summit and the Stanford HAI Healthcare AI Policy Workshop.
Most important among the many topics discussed: how to define the value of AI systems in biomedicine; how to better integrate AI technologies into the clinic and health system; and the ever-evolving relationship between humans and AI, especially how scientists and clinicians can thrive in an era exploding with new technology.
Lloyd Minor, MD, dean of the Stanford School of Medicine and vice president for medical affairs at Stanford University, emphasized that advancing AI technology alone isn’t enough for meaningful transformations in medicine. It’s crucial to also understand roadblocks: misaligned incentives in delivering AI-supported care, inadequate evaluation of technologies, outdated approaches to education, and fragmented or siloed data, to name a few.
“What we hope to focus on during this conference and in the days that follow is how we not only advance AI and its impact on health, health care delivery and discovery, but also how we address, confront and change the impediments that already are impacting the ability of AI to accelerate biomedical discovery; to accelerate the translation of those discoveries into better treatments, better predictions and preventions of disease altogether; and to transform the health care delivery system in our country and … around the world,” said Minor, who co-leads the RAISE Health Initiative.
RAISE Health co-lead James Landay, PhD, professor of computer science and co-director of the Stanford Institute for Human-Centered AI (HAI), reminded attendees of the original mission of HAI: to bring disciplines and minds from all the Stanford schools together to advance AI research, education, policy and practice to improve the human condition.

“I think you’ll see that reflected in today’s program,” Landay said. “As leaders in both medicine and AI, the folks in this room have a responsibility. We need to not only push the boundaries of what’s technologically possible but also ensure that these advancements serve humanity in the most beneficial and equitable ways possible. Today’s symposium is a testament to this commitment.”
Throughout the week, the symposia and workshops brought together AI experts from academia, industry, patient advocacy groups, government and nonprofits to parse the opportunities and challenges presented by widespread adoption of AI in health and medicine.
Below are the week’s top highlights:
How scientists and clinicians are redefining AI education and preparedness
With such rapid advancements, how should clinicians and scientists best learn current technologies and prepare the next generation for an AI-driven future?
- Veena Jones, MD, vice president and chief medical informatics officer at Sutter Health, pointed to an important contradiction: clinicians who use AI in their everyday lives but are hesitant to use it on the job. “A lot of what we need to implement…is buy-in. We need clinicians to recognize the benefits and be willing to use it. At our health system, our CEO brought together our top 500 leaders across the organization and had a couple of hour-long sessions just talking about AI. What is the future of AI? What does it mean for health care? What does it mean for our clinicians?” The time to do that is now, she said. “If we’re not adopting it, if we don’t have that appetite for it, we will be left behind. It’s not something that’s going to go away. The change management piece of implementation is so incredibly important, but it’s also one of the hardest things to do.” (AIMI Pediatrics Symposium)
- AI offers a new opportunity to integrate how physicians work and learn. “When does the clinical decision support tool also become an educator? You’re learning as you work with [AI tools], but our current workflows don’t really support that,” said Kimberly Lomis, MD, vice president for medical education innovations at the American Medical Association. “We’ve allowed a system to evolve that doesn’t allow us time to think and learn. Somehow, we’re going to have to show how systems with their current view of productivity are short-sighted; investment in learning is a long game.” (RAISE Health Symposium)
- Brian Anderson, MD, chief executive officer of the Coalition for Health AI, shared more about the organization’s work in educating clinician groups. In addition to working with the American Nurses Association and Florida State University to build an educational curriculum for nurses, the group is launching a program to work with specialty societies and their clinician leaders to create AI workflows specific to different contexts and specialties, including cardiology, family medicine, pediatrics, pathology and radiology. “We’re really excited about the opportunity to begin building out what the standard of care looks like in a future world with AI.” (Coalition for Health AI Leadership Summit)
How to better integrate AI into clinical practice
As AI becomes more prevalent in the clinic, health care professionals are identifying opportunities to integrate the technology into their practice, from developing AI technology with patient involvement and engagement, to ensuring the right patient data is being used to train algorithms for specific populations.

- A study published in 2024 estimated that if primary care doctors did everything they were supposed to do in a day — seeing patients, navigating medical records, charting and summarizing visits, etc. — they would be working 26.7-hour days. Panelists lobbied for using AI to do the “unsexy” things — paperwork and administrative tasks. “The job we do is hard enough, and if we can train [AI] to do some of the things that we don’t need to do, I think we can be even more cognitively in tune to our constantly changing environment,” said Carla Pugh, MD, PhD, professor of surgery and the Thomas Krummel Professor. “We’ve talked a lot about how we train the future physician to work with AI. I want to hear more about how we train AI to work with us.” (AI in Medical Education Symposium)
- Panelists spoke of the need for greater patient engagement in the development process of AI tools, saying it’s not enough to simply include a patient in the review process. Patients should be consulted at the beginning to establish trust through co-design. “We’ve created this illusion of control and trust that is often very performative. What that looks like is putting patients on a working committee or something that is not actually tied to the needs, priorities or interests of that specific community,” said Andrea Downing, co-founder and board president of The Light Collective, a nonprofit patient advocacy group. “When you’re trying to recruit patients, often we’re not interested in focus groups or gift cards. We’re interested in partnership, and when we say partnership, that means a different thing to our communities than it does to a lot of the folks that are seeking to partner with us. We have to relearn how to partner — think of us as colleagues, co-PIs…or people that you place on your board for governance.” (RAISE Health Symposium)
- Dong-han Yao, MD, physician informaticist and emergency physician at Stanford Medicine, described how to harness AI in the clinic, providing practical instruction on how to craft effective prompts, known as “prompt engineering.” It comes down to context. “Think about talking to an LLM like you’re presenting information to a highly educated, very capable friend who knows nothing about your particular field or your particular problem,” he said. Other tips: provide specific constraints on the type of answer you hope to receive, such as instructing the chatbot to respond in bulleted form, in fewer than three paragraphs or in clinical shorthand; a minimal sketch of the pattern appears below. (AI in Medical Education Symposium)
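To make that advice concrete, here is a minimal sketch of the prompting pattern Yao described: supply background context, frame the model as a capable but uninformed colleague, and attach explicit output constraints. The build_prompt helper, the sample question and all wording are illustrative assumptions, not material from the talk; the assembled prompt could be pasted into, or sent programmatically to, whatever LLM tool an institution supports.

```python
# Illustrative sketch only: build_prompt and the toy clinical question are
# hypothetical examples of the context-plus-constraints pattern, not code
# or content from the symposium.

def build_prompt(question: str, context: str, constraints: list[str]) -> str:
    """Assemble a prompt that supplies background and explicit output
    constraints, following the 'educated friend who knows nothing about
    your field' framing."""
    constraint_text = "\n".join(f"- {c}" for c in constraints)
    return (
        "You are assisting a clinician. Treat this as a conversation with a "
        "highly educated, very capable colleague who knows nothing about "
        "this particular field or problem.\n\n"
        f"Background:\n{context}\n\n"
        f"Question: {question}\n\n"
        f"Constraints on your answer:\n{constraint_text}"
    )

prompt = build_prompt(
    question="What are common causes of pediatric syncope?",
    context="Emergency department setting; the answer informs a teaching note.",
    constraints=[
        "Respond in bulleted form",
        "Use fewer than three paragraphs",
        "Standard clinical shorthand is acceptable",
    ],
)
print(prompt)
```

The design point is simply that context and constraints travel with every request rather than being left for the model to infer.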
How AI is advancing biomedical research and health
From sophisticated foundation models to AI agents, health care professionals and researchers are increasingly drawing on AI to accelerate their work.
- Pediatricians might see their patients once every few months. AI agents could provide extra support, particularly for children who require additional attention. “Both pediatricians and teachers tell us that they’re craving a way to reach into the home, partner with the people who are with the child on a daily basis and extend their support,” said Catalin Voss, co-founder and chief technology officer of Ello Technology, a company that produces an AI agent (in this case, in the form of a virtual elephant) to support pediatric learning and development. “We believe that this is how the evolution of education and primary care will probably happen.” (AIMI Pediatrics Symposium)
- Russ Altman, MD, PhD, the Kenneth Fong Professor and a professor of bioengineering, of biomedical data science and of medicine, pointed to the advancement of foundation models, large AI models trained on massive datasets to adapt to a wide range of tasks (think ChatGPT), as a key springboard for AI in biomedical research. The increased sophistication and accuracy are boons to tasks such as protein and molecule design, automated analysis of clinical images and a better understanding of cell function. (RAISE Health Symposium)
- Jonathan Chen, MD, assistant professor of medicine and biomedical data science, shared his experience using chatbots to refine his own conversational skills. “I can actually practice high-stakes conversation in a low-stakes environment, and I can use this computer to become better at the most human skills I need — communicating with patients and their families,” he said. “If I need to go talk to a patient, I practice with the chatbot first. If I have to go into a business negotiation, I practice with the chatbot first. If I have to ask my wife for something, I practice with the chatbot first.” (AI in Medical Education Symposium)
How fairness, equity and responsibility shape AI evaluation and policy
How to implement AI solutions fairly and equitably is as important as ensuring the accuracy and efficiency of any AI tool. That priority is surfacing in a variety of ways, shaping both evaluation and policy.
- In a closed-door workshop, participants discussed gaps in current regulation and governance and brainstormed how novel governance structures could apply to the use of AI tools in insurance coverage decisions. They discussed what keeping “humans in the loop” means in a health care context, as well as how to navigate misaligned incentives to balance improvements in patient care with cost effectiveness. (Stanford HAI Healthcare AI Policy Workshop)
- When it comes to patient care, the data usage, testing and evaluation of AI need a reality check, according to Sanmi Koyejo, PhD, assistant professor of computer science. Only 5% of health care AI studies use real patient data, and 95% of model evaluations focus on accuracy, while a mere 16% assess bias or fairness. Those imbalances flag the need for more diverse and representative datasets to train AI, and for additional approaches that build fairness into model development, such as methods to detect bias in real time; one simple subgroup check is sketched after this list. (RAISE Health Symposium)
- Throughout the events, the question “Do you need to know how AI works to use it?” surfaced multiple times, with mixed responses. Daniel Ting, MBBS, PhD, associate professor at Duke University and chief digital and data officer at Singapore National Eye Centre, said he believes you do, to an extent. “To drive a car, you don’t actually need to know how the car is being made, but the moment that you place your hands on the steering wheel, you need to make sure that you drive it safely and responsibly,” he said. “I think this is where 100% of the health care practitioners, whether clinicians, nurses or anyone who touches AI patient care, do need to make sure that they understand what they’re using and what the implications are when they actually use such technologies.” (AIMI Symposium)
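To make the fairness point concrete, the sketch below shows one simple way to check subgroup performance alongside overall accuracy. It is a minimal illustration, assuming NumPy arrays of true labels, model predictions and demographic group tags; the function name and toy data are hypothetical, and real evaluations would use richer metrics on real patient cohorts.

```python
# Illustrative sketch only: subgroup_accuracy_gap and the toy data are
# hypothetical, meant to show what "assessing bias or fairness" can mean
# in addition to reporting a single overall accuracy number.
import numpy as np

def subgroup_accuracy_gap(y_true, y_pred, groups):
    """Return the largest accuracy gap between any two subgroups, plus
    per-subgroup accuracies -- one simple fairness check."""
    accs = {}
    for g in np.unique(groups):
        mask = groups == g
        accs[str(g)] = float((y_true[mask] == y_pred[mask]).mean())
    gap = max(accs.values()) - min(accs.values())
    return gap, accs

# Toy labels, predictions and a demographic tag for each patient.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 0, 1, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

gap, per_group = subgroup_accuracy_gap(y_true, y_pred, groups)
print(f"accuracy gap between subgroups: {gap:.2f}; per group: {per_group}")
```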
What the future holds: Promise and challenges
While there’s ample reason to be optimistic, questions remain about how AI and medicine will intertwine in the future.
- The physician-AI dynamic is changing. Speakers called into question the “fundamental theorem” that says human + AI = better outcomes than either could achieve alone. Studies conducted by a Stanford Medicine team show that’s not always the case, with AI outperforming physicians in some diagnostic and reasoning tasks. What should the core competencies of a doctor be if an algorithm can outdo physicians when it comes to medical knowledge? Panelist Bryant Lin, MD, clinical professor of medicine, boiled it down to something simple: human connection. “I’ve been seeing the same patients for almost 20 years. I’ve attended the funerals of patients. I’ve sat by their bedside. AI is not going to be there when you’re taking your last breaths and looking for comfort. To me, medicine is at its core about human connection,” Lin said. (AI in Medical Education Symposium)
- In closing the AIMI Symposium, Nigam Shah, MBBS, PhD, chief data scientist at Stanford Health Care, said he hopes to see the relationship between humans and AI evolve. “We might be falling into the Turing Trap,” he said, invoking a phrase coined by Erik Brynjolfsson, PhD, the Jerry Yang and Akiko Yamazaki Professor at HAI, for AI that merely replicates what humans already do. “That’s a very replacement mindset. We need to start thinking about other jobs we can do with a machine, or human plus machine, that we would never imagine doing as a human, unaided. I would love to see more of that next year.” (AIMI Symposium)
- Eric Horvitz, MD, PhD, chief scientific officer at Microsoft, said he thinks we’re living in a transformational time. “This is how it feels to live in a time when, I believe 500 years from now, looking back at the next 25 to 50 years, this period of time will have a name. This is when all these things are happening with machine intelligence coming into the world and interweaving in a variety of ways…and certainly the nature of health care delivery will be shifting and changing.” The name he would give this era: the Computational Revolution. (RAISE Health Symposium)
- Natalie Pageler, MD, chief medical information officer of Stanford Medicine Children’s Health, reminded attendees that children aren’t just little adults; there are many reasons AI developers must take a fundamentally different approach to applying these tools to pediatric health care. “Our datasets are smaller; our patients evolve quickly from fragile neonates to fiercely independent teenagers; our regulatory and ethical considerations are different; and, perhaps most importantly, our outcomes aren’t measured in months lived, but in lifetimes of potential realized,” Pageler said. “We’ve been reminded of the hard truths — the dangers of applying adult training models to pediatrics [and] the gaps in our datasets that leave out some of the most vulnerable.” But, she said, she was inspired by the community of pediatric innovators who gathered during this event. “These aren’t people who are just waiting for the future of health care; they are actively working to shape and improve care for children with intentionality, humility and hope.” (AIMI Pediatrics Symposium)
Photo caption: Curt Langlotz, PhD, director of the Stanford Center for Artificial Intelligence in Medicine and Imaging; Russ Altman, MD, PhD, the Kenneth Fong Professor and a professor of bioengineering, of biomedical data science and of medicine; Natalie Pageler, MD, chief medical information officer of Stanford Medicine Children’s Health; and Sanmi Koyejo, PhD, assistant professor of computer science, discuss the past year’s biggest advances in artificial intelligence.