A Focus on Responsibility, Trust, and Safety
As artificial intelligence (AI) becomes increasingly integral to healthcare, the urgency to integrate ethical governance cannot be overstated.
Healthcare and health technology companies must develop a deep understanding of how to embed responsibility, ethics, and fairness throughout the AI lifecycle. We are at the threshold of a significant transformation in AI’s application and impact, one that requires balancing its benefits with a commitment to ethical development and use.
The advent of AI in healthcare promises to be revolutionary: a true paradigm shift in patient care and research, reducing our health system’s complexity and increasing administrative efficiency. However, the rapid evolution of AI technologies brings complex ethical dilemmas to the forefront.
Commitment to Ethical AI
I lead an organization whose purpose is to simplify the business of care. Serving many health plan customers, including nine of the top 10 payers, we invest heavily in AI to reduce friction between health plans and care providers, increase savings for stakeholders, and reduce complexity so consumers are better informed and empowered.
AI’s role in healthcare is multifaceted, offering advancements in diagnostic precision, tailored treatment plans, better financial experiences for all stakeholders, and improved patient outcomes. The optimism surrounding AI’s impact on healthcare is substantial, reflecting its data analysis, prediction, and clinical support capabilities. However, alongside these opportunities, there is a critical need to address the ethical implications of AI’s integration into sensitive areas like patient care and data handling.
Understanding the Ethical Imperative
The trust deficit in AI technologies within healthcare settings is significant. According to recent reports, more than 60% of patients distrust AI in healthcare. This skepticism is rooted in concerns over data privacy, potential biases, and the lack of transparency in AI decision-making processes. The ethical deployment of AI thus becomes not merely a technical challenge but a moral and societal obligation.
In recent studies published in the Journal of Medical Internet Research and the Journal of Consumer Research, mistrust in medical AI systems arises from concerns about the systems themselves and about the practices of the companies developing these technologies. Respondents highlight concerns regarding data privacy, challenges in collecting high-quality and accurate medical data, and the perception that technology companies place greater emphasis on profit than on human well-being.
As powerful as AI is becoming, companies must remember that human-to-human interactions, both in person and digital, are the very essence of healthcare. AI in healthcare must therefore prioritize human interactions, which requires a foundation of responsibility and ethics in AI’s creation, testing, deployment, and monitoring.
The RAISE Benchmarks: A Strategic Tool for AI Safety
In response to these challenges, the Responsible AI Institute has introduced the RAISE Benchmarks to facilitate responsible AI development and deployment.
These benchmarks, including the Corporate AI Policy Benchmark, LLM Hallucinations Benchmark, and Vendor Alignment Benchmark, are pivotal in guiding organizations toward compliance with global standards and addressing challenges in generative AI and large language models (LLMs).
- RAISE Corporate AI Policy Benchmark. This tool evaluates the scope of a company’s AI policies and their alignment with the RAI Institute’s model enterprise AI policy, which is informed by the NIST AI Risk Management Framework. It guides organizations in framing AI policies that encompass the trustworthiness and risk considerations unique to generative AI and LLMs.
- RAISE LLM Hallucinations Benchmark. This benchmark addresses the risk of AI hallucinations, a common issue in LLMs, which can lead to misleading outputs. It assists organizations in assessing and minimizing these risks in AI-powered products and solutions.
- RAISE Vendor Alignment Benchmark. This benchmark evaluates whether supplier organizations’ AI policies align with their clients’ ethical and responsible AI policies, helping ensure consistent AI practices across the supply chain.
Deepening Regulatory and Policy Frameworks
To harness AI’s potential ethically, healthcare leaders must navigate an evolving landscape of regulatory and policy frameworks. Initiatives like President Biden’s Executive Order on AI, the European Union’s AI Act, Canada’s Artificial Intelligence and Data Act, and the UK AI Safety Summit underscore the growing global focus on safe and responsible AI development.
Aligning with standards such as the NIST AI Risk Management Framework and the emerging ISO/IEC 42001 family of standards is crucial for healthcare organizations.
Building Trust Through Advanced Education and Engagement
Educating healthcare professionals and the public about AI’s capabilities and limitations is paramount in building trust. This education should be comprehensive, addressing AI’s benefits and challenges and empowering patients with knowledge about how AI impacts their care.
The Role of Leadership in Ethical AI Integration
Senior business and technology leaders are critical in steering their organizations toward ethical AI practices. Leadership commitment to ethical AI principles, transparent communication, and continuous evaluation of AI systems is vital in building a culture of trust and accountability.
While savings and business results are of primary importance, it is also important to anchor usable solutions to defensible principles. Because AI is already powerful and growing more so, regulatory agencies and consumer watchdog groups will keep a keen eye on human controls, data protection, algorithmic and data biases, responsible design and monitoring, and impact at both the individual and systemic level.
AI in healthcare transcends technology; it’s a new era in patient care and efficiency. Leaders must steer their organizations towards ethically harnessing AI’s potential. The RAISE Benchmarks offer a practical framework for this endeavor, balancing benefits, risk mitigation, and trust-building.