Nurses Warn Patient Safety at Risk as AI Use Spreads in Health Care
Kaiser Permanente, one of the largest employers in San Francisco, Alameda and other Bay Area counties, has been an early adopter of AI. Company officials have said they rigorously test the tools they use for safety, accuracy and equity.
“Our physicians and care teams are always at the center of decision-making with our patients,” a Kaiser Permanente statement said in response to a KQED request for comment. “We believe that AI may be able to help our physicians and employees and enhance our members’ experience. As an organization dedicated to inclusiveness and health equity, we ensure the results from AI tools are correct and unbiased; AI does not replace human assessment.”
One program in use at 21 Kaiser hospitals in Northern California is the Advance Alert Monitor, which analyzes electronic health data to notify a nursing team when a patient’s health is at risk of serious decline. The program saves about 500 lives per year, according to the company.
But Gutierrez Vo said nurses have flagged problems with the tool, such as triggering false alarms or failing to catch some patients whose health is rapidly deteriorating.
“There’s just so much buzz right now that this is the future of health care. These health care corporations are using this as a shortcut, as a way to handle patient load. And we’re saying ‘No. You cannot do that without making sure these systems are safe,’” said Gutierrez Vo, a nurse with 25 years of experience at the company’s Fremont Adult Family Medicine clinic. “Our patients are not lab rats.”
The U.S. Food and Drug Administration has authorized some AI-enabled tools before they go to market, but mostly without the kind of comprehensive data required for new medicines. Last fall, President Joe Biden issued an executive order on the safe use of AI, which includes a directive to develop policies for AI-enabled technologies in health services that promote “the welfare of patients and workers.”
“It’s very good to have open discussions because the technology is moving at such a fast pace, and everyone is at a different level of understanding of what it can do and [what] it is,” said Dr. Ashish Atreja, Chief Information and Digital Health Officer at UC Davis Health. “Many health systems and organizations do have guardrails in place, but perhaps they haven’t been shared that widely. That’s why there’s a knowledge gap.”
UC Davis Health is collaborating with other health systems to implement generative AI and other types of AI with what Atreja referred to as “intentionality,” aiming to support their workforce and improve patient care.
“We have this mission that no patient, no clinician, no researcher, no employee gets left behind in getting advantage from the latest technologies,” Atreja said.
Dr. Robert Pearl, a lecturer at the Stanford Graduate School of Business and a former CEO of The Permanente Medical Group (Kaiser Permanente), told KQED he agreed with the nurses’ concerns about the use of AI in their workplace.
“Generative AI is a threatening technology but also a positive one. What is the best for the patient? That has to be the number one concern,” said Pearl, author of “ChatGPT, MD: How AI-Empowered Patients & Doctors Can Take Back Control of American Medicine,” which he said he co-wrote with the AI system.
“I’m optimistic about what it can do for patients,” he said. “I often tell people that generative AI is like the iPhone. It’s not going away.”