Jane Paik Kim, Ph.D., is a Clinical Associate Professor at the Stanford University School of Medicine in the Department of Psychiatry and Behavioral Sciences. Her research focuses on empirical approaches to critical ethical questions and challenges interlinked with emerging clinical innovation opportunities (e.g., the integration of innovative technologies in healthcare settings) and public health needs (e.g., the development of precision medicine interventions). Another focus of her work is the integration of statistical learning approaches for behavioral interventions delivered via digital platforms. With colleagues, Dr. Kim leads an NIH-funded project to understand the ethical implications of incorporating ML/AI technology into clinical practice. She also currently serves as a Co-Investigator on a range of NIH-funded interdisciplinary projects in behavioral health and psychiatry.
The potential for artificial intelligence applications, specifically machine learning, to prevent, predict, and help manage disease sparks immense hope not only for the individuals affected but also for the overall health of populations. Particularly exciting examples of these novel computing strategies are increasingly found in the development of deep learning algorithms for medical use. Already embedded in our daily lives, algorithms have begun to shape human decision-making, from the recruitment and hiring of employees to criminal sentencing. Outside of medicine, recognition of the ways algorithms may reflect, reproduce, and perpetuate bias has led to an explosion of theoretical and empirical research on the subject. There is increasing awareness of potential algorithmic weaknesses, including some that raise concerns about fundamental issues of fairness, justice, and bias. The need to anticipate and address emerging ethical issues in algorithmic medicine is time-sensitive. As health care systems increasingly use algorithms for patient identification, diagnosis, and treatment direction, the consequences of algorithmic bias carry real and significant costs. Numerous stakeholders are responsible for the development, application, and interpretation of algorithms in medicine, yet there has been very little engagement of the stakeholders most affected by these learning systems and tools. The overarching goal of this empirical, hypothesis-driven project is to articulate the landscape of ethical concerns and issues emerging in the development, refinement, and application of machine learning in algorithmic medicine. First, we will determine the distinct ethical issues and problems encountered in the development, refinement, and application of machine learning by querying the perspectives of a diverse array of stakeholders: machine learning researchers, clinicians, ethicists, and patients.
Using the insights generated in this first phase, we will conduct an evidence-based, information-sharing vignette survey to understand how the contexts of algorithms shape the ethically salient perspectives of physicians, those poised to incorporate such innovation into their own decision-making in patient care. Building on our established record of expertise in empirical ethics investigations, this sequence of projects leverages access to the exceptional machine learning research conducted at Stanford University, including work by NIH-funded investigators, and provides extensive, systematically collected data on the ethical issues encountered and anticipated throughout the development and implementation of algorithms. Finally, the project develops and refines an evidence-informed, information-sharing survey for use in better understanding how physicians respond to intelligent systems.
Dr. Kim’s research focuses on applying statistical approaches to evaluate and improve digital interventions and on using empirical approaches to understand ethical considerations for AI applications in healthcare.