

Patient Care | March 14, 2018

Stanford researchers probe the ethics of using artificial intelligence in medicine

By Patricia Hannon

Physicians should consider the ethical challenges of using artificial intelligence in making patient care decisions, three Stanford University School of Medicine researchers say in a perspective piece in The New England Journal of Medicine.

No encounter I've ever had with my physician has been recorded, nor has there ever been a scribe — human or electronic — taking notes on everything we've said.

Still, I'm not naive enough to believe that information from my record isn't ending up in some huge database to be tapped for making predictions about outcomes for other patients. In fact, I know that it is already being shared. And I'm OK with that.

But now, three Stanford University School of Medicine researchers are calling for a national conversation about the ethics of using artificial intelligence in medicine today.

In a perspective piece appearing in The New England Journal of Medicine, the authors acknowledge the tremendous benefit that machine-learning tools can bring to patient health, but they say that benefit can't be fully realized without careful analysis of how the tools are used.

"Because of the many potential benefits, there's a strong desire in society to have these tools piloted and implemented into health care," said lead author Danton Char, MD, assistant professor of anesthesiology, perioperative and pain medicine, in our news release. "But we have begun to notice, from implementations in non-health care areas, that there can be ethical problems with algorithmic learning when it's deployed at a large scale."

The press release explains:

David Magnus, PhD, senior author of the piece and director of the Stanford Center for Biomedical Ethics, says bias can play into health data in three ways: human bias; bias that is introduced by design; and bias in the ways health care systems use the data. 'You can easily imagine that the algorithms being built into the health care system might be reflective of different, conflicting interests,' says Magnus.

Among the authors' concerns is that data could become an "actor" in the doctor-patient relationship and in clinician decision-making, and that it could unintentionally be given more authority than human experience and knowledge.

"The one thing people can do that machines can't do is step aside from our ideas and evaluate them critically," Char told me.

Another challenge is that clinicians might not understand the intentions or motivations of the designers of the machine-based tools they're referencing. For example, a system might be designed to cut costs or to recommend certain drugs, tests or devices over others, something clinicians wouldn't necessarily know.

The authors acknowledge the social pressure to incorporate the latest tools in order to provide better health outcomes for patients. But they urge physicians to become educated about the construction of machine-learning systems and about their limitations.

"Remaining ignorant about the construction of machine-learning systems or allowing them to be constructed as black boxes could lead to ethically problematic outcomes," they write.

Co-author Nigam Shah, MBBS, PhD, associate professor of medicine, added that models are only as trustworthy as the data being gathered and shared: "Be careful about knowing the data from which you learn."

Photo by Maarten van den Heuvel



Senior associate editor

Patricia Hannon

Senior associate editor Patricia Hannon helps edit Stanford Medicine magazine as well as the Stanford Medicine News Center; she is also a manager of special projects. She is a San Jose State University graduate in journalism and anthropology, and a Pulitzer Prize-winning journalist who joined Stanford Medicine in 2017 from The (San Jose) Mercury News and Bay Area News Group. She is an expert in digital publishing, newsroom operations and managing crisis communications, having navigated a Bay Area-wide team of breaking news editors, reporters and photographers through the organizational shift into a digital-first publishing model. In her more than 20-year tenure in newsrooms in the Bay Area and South Carolina, she managed teams covering a variety of topics including government, law enforcement, education, religion, health and natural disasters. A San Jose native and fifth-generation Californian, she enjoys live music, especially when her two musician sons are performing; hiking in the spectacular Bay Area parks; traveling with or to visit friends and family; and supporting the San Francisco Giants, win or lose.