December 8, 2024


AI Algorithms Used in Healthcare Can Perpetuate Bias


The AI algorithms increasingly used to diagnose and treat patients can have biases and blind spots that could impede care for Black and Latinx patients, according to research co-authored by a Rutgers-Newark data scientist.

Fay Cobb Payton, a professor of Mathematics and Computer Science, has researched how AI technologies and algorithms often rely on data that can lead to generalizations about patients of color, failing to incorporate their cultural backgrounds and day-to-day living circumstances.

Payton, who is Special Advisor to the Chancellor on Inclusive Innovation at Rutgers-Newark, recently co-authored findings on AI and healthcare inequities for The Milbank Quarterly, which explores population health and health policy. Additional authors were Thelma C. Hurd of the Institute on Health Disparities, Equity, and the Exposome at Meharry Medical College, and Darryl B. Hood of the College of Public Health at Ohio State University.

Payton is co-founder of the Institute for Data, Research and Innovation Science (IDRIS) at Rutgers, which combines interdisciplinary research in the fields of medicine, public health, business, cultural studies, and technology. Part of its mission is to find the best ways data can be used to serve communities and to uncover intersections of data, technology, and society across fields.

The study co-authored by Payton found that because AI developers lack diversity and Black and brown patients are underrepresented in medical research, algorithms can perpetuate false assumptions and miss the nuance that a more diverse pool of developers and patient data would provide. Healthcare providers also play an important role in ensuring that treatment transcends the algorithm.

“How is the data entering into the system, and is it reflective of the population we are trying to serve?” asked Payton. “It’s also about a human being, such as a provider, doing the interpretation. Have we determined if there is a human in the loop at all times? Some form of human intervention is needed throughout.”
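As a hypothetical illustration of what “human in the loop” can mean in practice, the sketch below routes every algorithmic recommendation through a clinician before it is applied. All names and the `clinician_approves` callback are assumptions for the example, not anything prescribed by the study:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    patient_id: str
    treatment_plan: str
    model_confidence: float  # 0.0 - 1.0

def apply_with_review(rec: Recommendation, clinician_approves) -> str:
    """Route every model output through a human reviewer.

    `clinician_approves` is a callable standing in for the actual
    review step (an EHR task queue, chart review, etc.)."""
    # The model proposes; the clinician disposes. No recommendation
    # reaches the patient without explicit human sign-off.
    if clinician_approves(rec):
        return f"apply plan for {rec.patient_id}: {rec.treatment_plan}"
    return f"plan for {rec.patient_id} returned for manual reassessment"

# Example: a reviewer who rejects low-confidence outputs outright.
rec = Recommendation("p-001", "frequent in-person visits", 0.62)
print(apply_with_review(rec, lambda r: r.model_confidence >= 0.8))
```

The point of the gate is that the interpretation step Payton describes is structural, not optional: the code cannot return a treatment action without a human decision.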

Algorithms rely on “big data,” such as medical records, imaging, and biomarker values. But they don’t incorporate “small data,” such as social determinants of health, including access to transportation and healthy food, a patient’s community, and their work schedule, according to the study. These circumstances can make it harder for patients to comply with treatment plans that require frequent doctor visits, physical activity, and other measures.
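To make the big-data/small-data distinction concrete, here is a minimal sketch (all field names are hypothetical, not drawn from the study) of how a clinical record could be extended with social-determinant features, and flagged when that context is missing:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ClinicalRecord:
    """'Big data' typically available to a risk algorithm."""
    patient_id: str
    a1c: float                 # biomarker value
    bmi: float
    prior_admissions: int

@dataclass
class SocialContext:
    """'Small data' the study notes is usually missing."""
    has_reliable_transport: Optional[bool] = None
    food_access_score: Optional[float] = None  # e.g., distance to fresh produce
    works_multiple_jobs: Optional[bool] = None

def build_features(clinical: ClinicalRecord,
                   context: Optional[SocialContext]) -> dict:
    """Merge both views; flag when social context is absent so
    downstream users know the plan may over-assume adherence."""
    features = {
        "a1c": clinical.a1c,
        "bmi": clinical.bmi,
        "prior_admissions": clinical.prior_admissions,
        "social_context_missing": context is None,
    }
    if context is not None:
        features["has_reliable_transport"] = context.has_reliable_transport
        features["food_access_score"] = context.food_access_score
        features["works_multiple_jobs"] = context.works_multiple_jobs
    return features
```

In this framing, a model trained only on the first three fields would treat a missed appointment as non-adherence, while the added fields at least make the alternative explanations visible.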

“It doesn’t account for the cost of fresh produce. It may not account for the fact that someone does not have access to transportation but is working two jobs. They may be trying to do everything that the doctors say, but it’s assumed that they’re not adhering because no one talked to them about the why,” said Payton.

“This creates a ‘trope’ characterization of Black patients and can impact patients’ perceptions of the health system’s trustworthiness,” according to the study.

“The algorithm may come up with a proposed treatment plan,” Payton said. “It may indicate what resources should be used to treat the patient, and those recommendations might not take into account where they live, work, and play.”

Without socioeconomic considerations based on patients’ daily lives, treatment could suffer. But outcomes can improve with more information. For instance, doctors might prescribe longer-acting medication and interventions that don’t require travel, said Payton.

Biased algorithms can also fail to account for disparities in healthcare outcomes, such as an overall mortality rate that is nearly 30 percent higher for non-Hispanic Black patients than for non-Hispanic white patients, a gap attributable in part to higher rates of certain illnesses.

“By and large, they experience more cases of heart disease, stroke, and diabetes. Black females experience more severity in breast cancer,” said Payton. “This is what the data is saying; however, the preliminary research shows that algorithms may be racially biased, even when Black patients are sicker than the rest of the population. It can lead to them being misdiagnosed, not getting access to adequate resources, or facing delays in treatment.”

The algorithm’s failure to consider a patient’s location raises additional concerns.

“Most U.S. patient data comes from three states: California, Massachusetts, and New York. That in itself is problematic. If they’re in rural Mississippi, they might not be able to catch a reliable train or a bus. Those kinds of things impact care delivery and potentially the quality of care,” Payton said.

One solution is greater diversity not only among tech developers but also among physicians, according to research by Payton and her colleagues.

Only 5 percent of active physicians in 2018 identified as Black, and about 6 percent identified as Hispanic or Latinx, according to sources cited in the study. The percentage of underrepresented developers is even lower.

“Understanding the biases that exist in traditional education and among healthcare delivery professionals is important,” said Payton. “It is critical that developers have domain and technical skills to better understand healthcare (and the domain in question).”

There also must be a more rigorous process to review and assess the data being supplied to algorithms so they aren’t subject to biases that can exacerbate healthcare disparities, according to research by Payton and her colleagues.
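One simplified form such a review could take, offered here as an illustrative sketch rather than the authors’ protocol, is an automated check that each demographic group’s share of the training data roughly matches its share of the population being served, with deviations flagged for human review. All names and the tolerance threshold are assumptions for the example:

```python
from collections import Counter

def representation_audit(records: list[dict],
                         population_share: dict[str, float],
                         tolerance: float = 0.25) -> list[str]:
    """Flag groups whose share of the training data deviates from
    their share of the served population by more than `tolerance`
    (relative). Returns warnings for a human reviewer to assess."""
    counts = Counter(r["group"] for r in records)
    total = sum(counts.values())
    warnings = []
    for group, expected in population_share.items():
        observed = counts.get(group, 0) / total if total else 0.0
        if expected > 0 and abs(observed - expected) / expected > tolerance:
            warnings.append(
                f"{group}: {observed:.1%} of training data vs "
                f"{expected:.1%} of population"
            )
    return warnings

# Example: a dataset that over-represents one group and under-represents another.
data = [{"group": "A"}] * 80 + [{"group": "B"}] * 20
print(representation_audit(data, {"A": 0.6, "B": 0.4}))
```

A check like this does not remove bias on its own, but it surfaces the kind of skew, such as most patient data coming from three states, before a model is trained on it.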

With the right safeguards in place, the possibilities for AI to help patients can grow, said Payton.  

The research offers a list of recommendations that can reduce potential harms, including collective action among health stakeholders, developers, end users, and policymakers throughout the AI life cycle. AI has already made a difference in the lives of some patients.

“Predictive analytics can analyze large datasets for patterns and risk factors associated with diseases. AI can assist medical professionals in analyzing images for disease conditions. Generative AI (GenAI) using text, images, audio, and even video can inform patient monitoring,” said Payton. “Despite concerns about bias, privacy, security, and other issues, AI has the potential to do good.”
 
