COGNITIVE WORLD

Trust Is Good; Control Is Better

IMAGE: DEPOSITPHOTOS ENHANCED BY COGWORLD

Risk Mitigation Strategies for Artificial Intelligence Solutions in Healthcare Management

There are a growing number of examples of how Artificial Intelligence Solutions (AIS) can assist in improving healthcare management, including early diagnosis, chronic disease management, hospital re-admission reduction, efficient scheduling and billing procedures, and effective patient follow-ups, while attempting to achieve healthcare's quintuple aim.

Healthcare organizations aim to use AIS to increase patient safety and reduce risk while decreasing costs and increasing revenue. However, implementing AIS ethically and deploying them globally without undue risk is challenging, and it remains an area of concern for healthcare organizations. Given the far-reaching context of AIS, these risks are on the radar of regulatory, compliance, and global authorities.

FIGURE: ARTIFICIAL INTELLIGENCE SOLUTIONS
CREDIT: AUTHORS

Unethical AIS can lead to misdiagnosis, inappropriate treatment recommendations, and adverse health and financial outcomes for patients. It can also perpetuate existing disparities in healthcare access and outcomes and violate patients' privacy rights.

Often, training data for AIS are intertwined across government, private-sector, research, healthcare, and other organizations. When AIS are trained on biased data that fail to reflect the needs, culture, linguistics, and gender diversity of their populations, the AIS can become a biased tool that poses risks to the very population it exists to serve.
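One way to make this bias concern concrete is a simple representativeness check: compare the demographic mix of a training sample against the mix of the population the model is meant to serve, and flag large gaps for review. The following sketch is purely illustrative; the group labels, sample, and the 0.10 tolerance are assumptions for demonstration, not clinical or regulatory standards.

```python
from collections import Counter

def representation_gap(sample_groups, population_share):
    """Return the largest absolute gap between a group's share of the
    training sample and its expected share of the served population.

    sample_groups: list of group labels, one per training record.
    population_share: dict mapping group label -> expected fraction.
    """
    counts = Counter(sample_groups)
    total = len(sample_groups)
    return max(
        abs(counts.get(group, 0) / total - share)
        for group, share in population_share.items()
    )

# Hypothetical sample that over-represents group "A" and
# under-represents groups "B" and "C".
sample = ["A"] * 80 + ["B"] * 15 + ["C"] * 5
expected = {"A": 0.50, "B": 0.30, "C": 0.20}

gap = representation_gap(sample, expected)
if gap > 0.10:  # illustrative tolerance only
    print(f"Representation gap {gap:.2f} exceeds tolerance; review data sourcing.")
```

A check like this is only a starting point for data governance: it detects under- or over-representation of known groups, but not label bias or measurement bias, which require deeper review by the diverse team described below.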

To ensure that AIS in healthcare management are ethical Artificial Intelligence Solutions (e-AIS), it is important to design, develop, and deploy AIS in alignment with ethical principles. This includes using unbiased data and algorithms transparently, involving diverse stakeholders in their development and validation, ensuring appropriate levels of human oversight and decision-making, and safeguarding patient privacy and autonomy.

Designing, developing, and deploying e-AIS requires mindful consideration of ethical principles, best practices, and risk mitigation strategies. Key considerations for e-AIS are as follows:

  1. Identify a problem or the appropriate use case that e-AIS will be designed to address.

  2. Assemble a diverse and inclusive team to ensure that AIS are designed, developed, and deployed with ethical considerations in mind. Teams should include individuals with a range of multidisciplinary backgrounds and perspectives.

  3. Build transparency and explainability into AIS to ensure that its decisions and recommendations can be easily understood, explained, and validated by healthcare professionals and other stakeholders.

  4. Implement robust data governance policies to ensure that training data are accurate, unbiased, and representative of the population the AIS is intended to serve. Disclosure throughout the AI lifecycle is needed to maintain transparency, build trust, and set accountability boundaries explicitly across the AIS landscape.

  5. Protect patient privacy by using appropriate data security and privacy measures when AIS are designed, developed and deployed.

  6. Implement robust security measures to protect patient data and ensure that AIS are not vulnerable to cyberattacks or other security breaches.

  7. Establish clear guidelines and standard operating procedures for the use of e-AIS, including policies for dealing with any ethical or legal issues that may arise.

  8. Develop mechanisms for patients, healthcare professionals, and other stakeholders to provide feedback, and conduct regular monitoring and risk evaluation. Ensure human oversight is applied before final decisions are made.

  9. Audits of AIS should be conducted by an independent, external third-party auditor with expertise in both AIS and healthcare management. Because an audit is an after-the-fact control, AIS should also have robust built-in controls to keep risk minimal and results highly accurate.

  10. Engage with regulatory bodies and industry associations to stay up to date on best practices, guidelines, and regulations related to the use of e-AIS in healthcare management.
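The human-oversight requirement in points 8 and 9 can be sketched as a simple routing rule: a recommendation is only surfaced directly when the use case is low-stakes and the model's confidence is high; everything else is held for clinician review, and surfaced results are still logged for later audit. The function, threshold, and return labels below are hypothetical illustrations, not a standard from any regulation or library.

```python
def route_recommendation(confidence, high_risk, threshold=0.9):
    """Decide whether an AIS recommendation may be surfaced directly
    or must be held for clinician review first.

    confidence: the model's self-reported confidence in [0, 1].
    high_risk: True if the use case is clinically high-stakes.
    threshold: illustrative confidence cut-off, not a regulatory value.
    """
    if high_risk or confidence < threshold:
        # Human oversight before any action is taken.
        return "clinician_review"
    # Low-stakes, high-confidence output: surface it, but keep an
    # audit trail so after-the-fact review remains possible.
    return "surface_with_audit_log"
```

Note that high-risk recommendations are routed to a clinician regardless of confidence: built-in controls like this complement, rather than replace, the external audits described above.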

In summary, by employing risk mitigation strategies, healthcare organizations can help ensure that their AIS operate ethically and align with laws, regulations, and the values and mission of the organization to provide safe, high-quality, and effective patient care. Designing, developing, and deploying e-AIS for healthcare management requires a commitment to transparency, accountability, and the protection of patient rights, privacy, and safety. By taking a mindful and cautious approach to the design, development, and deployment of AIS, healthcare organizations can best leverage the power of AI to improve cost-effectiveness and patient outcomes while minimizing ethical risks.


Authors and affiliations:

Dr. Doreen Rosenstrauch
Founder and CEO, DrDoRo®Institute

Atul Gupta
Consultant, Government of Canada

Ariana Smetana
Business Innovation and Digital Transformation Advisor, CEO/Founder, AccelIQ.Digital

Utpal Mangla
General Manager, Industry EDGE Cloud, IBM Cloud Platform