Intensive Care Unit Clinical Pathway Support
Activity detection in Intensive Care Units (ICUs) is currently performed manually by trained personnel, primarily nurses, who log activities as they occur. This process is both expensive and time-consuming. Our goal is to design a system that automatically produces an annotated list of all activities that occurred in the ICU over the day. Overall, this system will reduce the monitoring workload of trained personnel and lead to a quicker, safer recovery for the patient, while providing benefits such as activity-based costing.
Activity recognition in hospitals is a task that has received little attention in the past. Among the main reasons for this gap are the lack of sensors installed in hospitals and the difficulty of obtaining access to the relevant data due to its sensitive nature. Thanks to our partnering hospital, we have access to depth sensors installed in eight ICU rooms.
We are developing a computer vision system capable of automatically detecting the following activities:
- Stage 1: patient getting out of bed, patient getting out of bed and walking, and a nurse performing oral care.
- Stage 2: clinician performing an ultrasound or X-ray exam, clinician turning a patient over in bed, and patient getting in/out of bed.
- Stage 3: various patient mobility activities such as patient getting in/out of a bed/chair with or without assistance.
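As a concrete illustration of the system's intended output, per-frame classifier predictions can be collapsed into the kind of annotated daily activity log described above. The sketch below is a minimal, hypothetical post-processing step: the label names, the `Event` structure, and the noise threshold are illustrative assumptions, not our deployed pipeline.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Event:
    """One logged activity, as a contiguous run of depth-sensor frames."""
    activity: str
    start_frame: int
    end_frame: int

def frames_to_events(frame_labels: List[str], min_len: int = 3) -> List[Event]:
    """Collapse a per-frame label stream (one predicted activity per depth
    frame, 'none' for background) into a log of contiguous activity events,
    dropping runs shorter than min_len frames as likely classifier noise."""
    events: List[Event] = []
    start, prev = 0, "none"
    for i, lab in enumerate(frame_labels + ["none"]):  # sentinel flushes the last run
        if lab != prev:
            if prev != "none" and i - start >= min_len:
                events.append(Event(prev, start, i - 1))
            start, prev = i, lab
    return events
```

For example, five consecutive frames predicted as oral care would be logged as a single oral-care event, while a two-frame blip would be discarded as noise.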
Once our system can successfully log the basic activities above, we plan to expand it to detect anomalies such as emergency situations. To do so, we could potentially use a dataset of simulations of different emergencies (e.g., patient falls on the floor).
We are partnering with SHC, where we will evaluate the use of computer vision technology to characterize patient-oriented activities for patients admitted to the SHC ICU rooms that are equipped with the computer vision sensors. We aim to develop new computer vision and machine learning algorithms that benefit from the amount and variety of real-world ICU activity data to build motion activity models that perform well in clinical settings.
We have partnered with Intermountain's Healthcare Transformation Lab where we have deployed 3D depth sensors in eight ICU rooms. With the help of Intermountain, we are using live data streams to teach our computer vision algorithms to discern events of clinical relevance. Using multiple sensors per room, our artificial intelligence system is capable of full-room activity understanding.
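One simple way to combine multiple sensors into a single full-room prediction, sketched here purely as an illustration (the score-averaging rule is an assumption, not necessarily the method used in our system), is to average each sensor's per-class confidence scores and take the highest-scoring class:

```python
def fuse_sensor_scores(per_sensor_scores):
    """Fuse per-sensor activity predictions for one frame.

    per_sensor_scores: one list of class-confidence scores per sensor
    (e.g. softmax outputs), all of the same length. Returns the index of
    the activity class with the highest mean score across sensors.
    """
    n_sensors = len(per_sensor_scores)
    n_classes = len(per_sensor_scores[0])
    mean_scores = [sum(scores[c] for scores in per_sensor_scores) / n_sensors
                   for c in range(n_classes)]
    return mean_scores.index(max(mean_scores))
```

Under this scheme, sensors with a clear view of the activity can outvote a sensor whose view is occluded.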
Amit Kaushal, MD, PhD.
Clinical Assistant Professor
Amit Kaushal, MD, PhD is a Clinical Assistant Professor of Medicine (Stanford-VA) and Adjunct Professor of Bioengineering at Stanford University. Dr. Kaushal's work spans clinical medicine, teaching, research, and industry.
Ali Al-Rajhi, PhD, MPH
Research Project Manager
Dr. Ali Al-Rajhi has extensive experience leading cross-functional teams in conducting clinical trials and publishing healthcare policy and systematic reviews. He has worked in ophthalmology with the American Academy of Ophthalmology and at the Stanford Center for Clinical Research (SCCR), where he oversaw trials in gastroenterology and hepatology. His interests lie at the intersection of data science and healthcare; he currently splits his time between Stanford’s Clinical Excellence Research Center (CERC) and the Stanford Partnership in AI-Assisted Care (PAC).
PAC Data Engineer
Interested in machine learning and deep learning. Working on cloud infrastructure for data collection.
Linden Sky Li
Undergraduate Student, Computer Science
A Computer Vision System for Deep Learning-based Detection of Patient Mobilization Activities in the ICU
Serena Yeung*, Francesca Rinaldo*, Jeffrey Jopling, Bingbin Liu, Rishab Mehra, N. Lance Downing, Michelle Guo, Gabriel M. Bianconi, Alexandre Alahi, Julia Lee, Brandi Campbell, Kayla Deru, William Beninati, Li Fei-Fei and Arnold Milstein
Nature Partner Journals (NPJ) Digital Medicine
Laëtitia Shao*, Zaid Nabulsi*, Ruchir Rastogi*, Bingbin Liu, Francesca Rinaldo, Serena Yeung, N. Lance Downing, William Beninati, Arnold Milstein, Li Fei-Fei
Machine Learning for Health Workshop, Neural Information Processing Systems (NeurIPS); December 2018
Bingbin Liu*, Michelle Guo*, Edward Chou, Rishab Mehra, Serena Yeung, N. Lance Downing, Francesca Salipur, Jeffrey Jopling, Brandi Campbell, Kayla Deru, William Beninati, Arnold Milstein, Li Fei-Fei
Machine Learning for Healthcare (MLHC) Conference; August 2018
Gabriel M. Bianconi, Rishab Mehra, Serena Yeung, Francesca Salipur, Jeffrey Jopling, Lance Downing, Albert Haque, Alexandre Alahi, Brandi Campbell, Kayla Deru, William Beninati, Arnold Milstein, Li Fei-Fei
Machine Learning for Health Workshop, Neural Information Processing Systems (NIPS); December 2017