Intensive Care Unit Clinical Pathway Support


Activity detection in Intensive Care Units (ICUs) is currently performed manually by trained personnel, primarily nurses, who log activities as they occur. This process is both expensive and time-consuming. Our goal is to design a system that automatically produces an annotated list of all activities that occurred in the ICU over the course of a day. Overall, this system will reduce the monitoring workload of trained personnel and lead to a quicker, safer recovery for the patient, while providing additional benefits such as activity-based costing.

Activity recognition in hospitals has received little research attention to date. Among the main reasons for this gap are the lack of sensors installed in hospitals and the difficulty of obtaining access to the relevant data due to its sensitive nature. Thanks to our partnering hospital, we have access to depth sensors installed in eight intensive care unit (ICU) rooms.

We are developing a computer vision system capable of automatically detecting the following activities:

  • Stage 1: patient getting out of bed, patient getting out of bed and walking, and a nurse performing oral care.
  • Stage 2: clinician performing ultrasound, x-ray, turning patient over in bed, and patient getting in/out of bed.
  • Stage 3: various patient mobility activities such as patient getting in/out of a bed/chair with or without assistance.

Once our system can successfully log the basic activities above, we plan to expand it to detect anomalies such as emergency situations. To do so, we could potentially use a dataset of simulations of different emergencies (e.g., patient falls on the floor).

Pilot Partnerships

We are partnering with SHC, where we will evaluate the use of computer vision technology to characterize patient-oriented activities of patients admitted to the SHC ICU rooms that are equipped with computer vision sensors. We aim to develop new computer vision and machine learning algorithms that leverage the volume and variety of real-world ICU activity data to build motion activity models that perform well in clinical settings.

We have partnered with Intermountain's Healthcare Transformation Lab where we have deployed 3D depth sensors in eight ICU rooms. With the help of Intermountain, we are using live data streams to teach our computer vision algorithms to discern events of clinical relevance. Using multiple sensors per room, our artificial intelligence system is capable of full-room activity understanding.


Ehsan Adeli, PhD.

PAC Faculty

Scientist, Stanford AI Lab, Stanford Vision and Learning, Computer Science Department
Clinical Assistant Professor, Department of Psychiatry and Behavioral Sciences, Stanford School of Medicine

Dev Dash, MD, MPH

PAC Faculty

Dr. Dash is an emergency medicine physician.
Clinical Assistant Professor, Emergency Medicine


Dev grew up in Montreal, Canada, and received his MD from Baylor College of Medicine. He earned a Master's in Public Health from Harvard University School of Public Health. He has a significant quantitative sciences background (physics, epidemiology). He initially began residency in neurosurgery but switched to Emergency Medicine, and is fellowship-trained in Clinical Informatics at Stanford. He has a strong interest in operationalizing effective and sustainable machine learning-integrated workflows in the healthcare setting.


Shrinidhi Lakshmikanth

PAC Data Engineer

Shrinidhi is interested in machine learning and deep learning, and works on cloud infrastructure for data collection.

Tracy Terada

Research Operations Manager

Tracy is a 15+ year administrative veteran of the Stanford School of Medicine. She started at the Lane Medical Library and is currently with the Clinical Excellence Research Center.


Zane Durante

Doctoral Student, Computer Science

Zane is a first-year PhD student in Computer Science, admitted in Autumn 2021, and is currently rotating with Prof. Fei-Fei Li. His research interests include self-supervised learning, multimodal signal processing, and AI for healthcare.

Alan Luo

Doctoral Student, Computer Science

Alan is a Ph.D. student in the Stanford Vision and Learning Lab, advised by Prof. Fei-Fei Li. His main research interests include weakly-supervised learning, transfer learning, and deep learning.

Zhuoyi Huang

Graduate Student, Computer Science

Zhuoyi is a master's student in computer science. Her research interests lie in data-driven machine learning, computer vision, and reinforcement learning, as well as their applications in healthcare and clinical trials.

Ruochen (Chloe) Liu

Graduate Student, Computer Science

Chloe is a master's student in computer science interested in the intersection of machine learning and social good.

Neha Srivathsa

Undergraduate Student, Computer Science

Neha is a member of PAC. Her research interests lie in human-centered artificial intelligence and machine learning, toward improving health outcomes.


A Computer Vision System for Deep Learning-based Detection of Patient Mobilization Activities in the ICU


Serena Yeung*, Francesca Rinaldo*, Jeffrey Jopling, Bingbin Liu, Rishab Mehra, N. Lance Downing, Michelle Guo, Gabriel M. Bianconi, Alexandre Alahi, Julia Lee, Brandi Campbell, Kayla Deru, William Beninati, Li Fei-Fei and Arnold Milstein

Nature Partner Journals (NPJ) Digital Medicine; March 2019

Descriptive Analysis of ICU Patient Mobilization from Depth Videos

Laëtitia Shao*, Zaid Nabulsi*, Ruchir Rastogi*, Bingbin Liu, Francesca Rinaldo, Serena Yeung, N. Lance Downing, William Beninati, Arnold Milstein, Li Fei-Fei

Machine Learning for Health Workshop, Neural Information Processing Systems (NeurIPS); December 2018

3D Point Cloud-Based Visual Prediction of ICU Mobility Care Activities

Bingbin Liu*, Michelle Guo*, Edward Chou, Rishab Mehra, Serena Yeung, N. Lance Downing, Francesca Salipur, Jeffrey Jopling, Brandi Campbell, Kayla Deru, William Beninati, Arnold Milstein, Li Fei-Fei

Machine Learning for Healthcare (MLHC) Conference; August 2018

Vision-Based Prediction of ICU Mobility Care Activities using Recurrent Neural Networks

Gabriel M. Bianconi, Rishab Mehra, Serena Yeung, Francesca Salipur, Jeffrey Jopling, Lance Downing, Albert Haque, Alexandre Alahi, Brandi Campbell, Kayla Deru, William Beninati, Arnold Milstein, Li Fei-Fei

Machine Learning for Health Workshop, Neural Information Processing Systems (NIPS); December 2017