Tuesday, March 12, 2024
12:00 PM – 1:00 PM

Location


Stanford University School of Medicine

291 Campus Dr
Stanford, CA 94305
Event

Medical Physics Seminar - James Lamb

Radiotherapy Quality and Safety: Automation, AI, and Human Factors

Time:
12:00pm – 1:00pm, Seminar & Discussion

Location:
Zoom Webinar

Webinar Registration:
https://stanford.zoom.us/webinar/register/WN_Y5QwpIWMQ3ugvdnzT_iyEQ

Check your email for the Zoom webinar link after you have registered

Speaker

James Lamb, Ph.D., Associate Professor, Vice Chair, and Director of Medical Physics at the University of California, Los Angeles

Dr. Lamb received his doctorate in experimental high-energy particle physics from the University of California, Santa Barbara in 2009. Subsequently, he completed postdoctoral research in the Department of Radiation Oncology at Washington University in St. Louis. He joined the faculty at UCLA in 2010. From 2014 to 2018 he was lead physicist for UCLA’s ViewRay service, and from 2018 to 2022 he was Director of Dosimetry and Planning. Currently, he is Vice Chair and Director of Medical Physics. Since 2012 he has taught Fundamentals of Dosimetry in UCLA’s Physics and Biology in Medicine graduate program. He leads a federally funded research team and is a member of the Radiation Oncology Safety, Automation and Machine Learning (ROSAML) research group.

Abstract

Automation and artificial intelligence (AI) are increasingly used in radiation oncology and show great promise to promote safety and increase workflow efficiency. Conversely, maladroit implementations may increase errors and slow down work. We discuss theoretical approaches to understanding and mitigating human error, from Reason’s theory of latent errors to cognitive biases in medical decision making, as well as the risks of automation and AI themselves, such as over-reliance and alert fatigue. Practical approaches to implementing new automation-based safety systems in the clinic are described. We present machine-learning and AI-based systems being developed at UCLA for the mitigation of human errors in the planning and delivery of radiotherapy, with a central focus on an AI system that automates image-guidance review in order to make radiotherapy delivery safer and more resource efficient.

• Theoretical approaches to understanding and mitigating human errors
• Automating the image review function of radiation therapists
• Practical automation in the clinic: promise, perils, and pitfalls
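
The abstract mentions an AI system that automates image-guidance review while managing risks such as over-reliance and alert fatigue. The Python below is a purely illustrative sketch, not a description of UCLA's actual system: the ReviewResult fields, the triage routing, and the tolerance and confidence thresholds are all hypothetical, chosen only to show how an automated review step might gate its own output.

"""Hypothetical sketch of an automated image-guidance review gate.

Low-confidence model outputs are escalated to a human reviewer (to limit
over-reliance on automation), and repeat alerts for the same finding are
suppressed (to limit alert fatigue). All names and thresholds are invented
for illustration; real clinical values would be site-specific."""

from dataclasses import dataclass

@dataclass
class ReviewResult:
    fraction_id: str
    misalignment_mm: float   # model-estimated residual setup error
    confidence: float        # model's confidence in its own estimate

TOLERANCE_MM = 3.0       # hypothetical: setup error considered acceptable
MIN_CONFIDENCE = 0.9     # hypothetical: below this, defer to a human

_recent_alerts: set[str] = set()

def triage(result: ReviewResult) -> str:
    """Route one automated image review to pass / alert / human review."""
    if result.confidence < MIN_CONFIDENCE:
        # Never act autonomously on an uncertain prediction.
        return "escalate: human review required"
    if result.misalignment_mm <= TOLERANCE_MM:
        return "pass: within tolerance"
    if result.fraction_id in _recent_alerts:
        # Deduplicate repeat alerts for the same fraction.
        return "suppressed: already alerted"
    _recent_alerts.add(result.fraction_id)
    return f"alert: {result.misalignment_mm:.1f} mm exceeds tolerance"

if __name__ == "__main__":
    for r in (ReviewResult("fx01", 1.2, 0.97),
              ReviewResult("fx02", 5.4, 0.95),
              ReviewResult("fx02", 5.4, 0.95),   # duplicate finding
              ReviewResult("fx03", 4.8, 0.55)):  # uncertain model output
        print(r.fraction_id, "->", triage(r))

The two gates reflect the human-factors themes of the talk: escalating uncertain cases keeps a human in the loop rather than inviting over-reliance, and deduplicating repeat alerts is one common tactic against alert fatigue.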