Risks of AI in Healthcare
- Dr Gillie Gabay

- Dec 1
- 5 min read
Are you hearing the term "AI" around you daily? Do you have a positive attitude toward AI, or are you still undecided?
The rapid integration of Artificial Intelligence (AI) across clinical and administrative workflows presents unparalleled opportunities for efficiency and diagnostic accuracy. Yet deploying AI without safeguards poses significant risks: threats to patient safety, ethical bias, and regulatory noncompliance. The conversation around AI in healthcare should therefore be more balanced, reflecting not only its incredible potential but also the significant risks it entails. This paper presents the risks of adopting AI, clustering them around four critical areas that directly affect patient outcomes and institutional credibility: accuracy, algorithmic bias, regulatory liability, and cyber-attacks. Below, I elaborate on each risk and propose a framework to cope with them.
Risks of AI in Healthcare
The accuracy risk is best understood as model drift: how quickly an AI system loses predictive accuracy after a change in the underlying data distribution, or after a gap opens between the input features and the targeted outcomes. For example, a change in the prevalence of a disease, new treatment protocols, or a shift in the quality of imaging can cause the AI model to become obsolete without any warning.
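To make drift detection concrete, here is a minimal sketch of how a rolling accuracy check against a validated baseline could flag a model that has silently degraded. All function names, thresholds, and window sizes here are illustrative assumptions, not a prescribed implementation:

```python
from collections import deque

def make_drift_monitor(baseline_accuracy: float, tolerance: float, window: int = 100):
    """Track rolling accuracy of a deployed model and flag drift.

    Returns a function that records one (prediction, ground_truth) pair
    and returns True once rolling accuracy over the last `window` cases
    drops below baseline_accuracy - tolerance.
    """
    outcomes = deque(maxlen=window)

    def record(prediction, ground_truth) -> bool:
        outcomes.append(prediction == ground_truth)
        if len(outcomes) < window:
            return False  # not enough evidence to judge drift yet
        rolling = sum(outcomes) / len(outcomes)
        return rolling < baseline_accuracy - tolerance

    return record

# Example: a model validated at 92% accuracy; alert if rolling accuracy < 87%
monitor = make_drift_monitor(baseline_accuracy=0.92, tolerance=0.05, window=50)
```

In practice the ground truth arrives with a delay (e.g., a confirmed diagnosis), so this kind of check runs retrospectively, but it illustrates why a model can "pass" at deployment and still fail months later.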
Algorithmic bias, caused by systematic errors, can create unfair outcomes for specific demographic cohorts (e.g., gender, socioeconomic status) that were underrepresented in the model's training data. This compromises the fundamental ethical principle of justice in healthcare and exposes the institution to significant regulatory and public-relations risk.
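One simple way to surface such bias is a subgroup audit: compute the model's accuracy separately for each demographic group and flag a large spread. The sketch below is a hedged illustration (the record format and gap metric are assumptions; real fairness audits use richer metrics such as false-negative rates per group):

```python
def subgroup_accuracy(records):
    """Compute accuracy per demographic group.

    records: iterable of (group, prediction, ground_truth) tuples.
    Returns {group: accuracy}. A large spread across groups is a
    red flag for algorithmic bias.
    """
    totals, correct = {}, {}
    for group, pred, truth in records:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (pred == truth)
    return {g: correct[g] / totals[g] for g in totals}

def max_accuracy_gap(by_group):
    """Largest accuracy difference between any two groups."""
    vals = list(by_group.values())
    return max(vals) - min(vals)
```

An audit like this, run on held-out local data before deployment, turns the abstract "fairness across demographics" requirement into a number an executive can set a threshold on.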
Regarding liability, while regulators provide guidance on AI as a medical device, the accountability for harm caused to patients by a compromised AI output rests with the provider who implements the AI and the clinician utilizing it. Procedures that demonstrate due diligence are therefore a must to avoid such harm. Many AI models, however, cannot provide a transparent, human-understandable reason for their specific output. The inability to explain how and why a certain diagnostic score was generated for a patient prevents thorough clinical validation. It also inhibits the root cause analysis that must follow a critical incident or a near-miss error in care.
Moreover, as in any industry, heavy reliance on proprietary, third-party AI services introduces single points of operational failure. If, for example, a vendor's system experiences downtime, clinical processes that depend on an immediate output, such as emergency department triage prioritization, may cease to function, resulting in operational paralysis and potential patient harm that no hospital can contain. In addition, AI models rely on continuous, high-volume data ingestion. If the flow between data sources, feature stores, and model-training pipelines is not rigorously secured, the system becomes vulnerable to injection attacks or unauthorized data modification, leading to poisoned models or data leakage.
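The operational mitigation for vendor downtime is graceful degradation: every call to a third-party service gets a hard timeout and a documented manual fallback. A minimal sketch, in which the endpoint URL, payload format, and fallback flag are all hypothetical:

```python
import urllib.request
import urllib.error

# Illustrative flag routing the case to a documented manual workflow
FALLBACK_PRIORITY = "review_by_charge_nurse"

def triage_priority(payload: bytes, vendor_url: str, timeout_sec: float = 2.0) -> str:
    """Call a third-party triage-scoring endpoint with a hard timeout.

    If the vendor is down or slow, degrade gracefully to the manual
    workflow instead of blocking the emergency department.
    """
    try:
        req = urllib.request.Request(
            vendor_url, data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req, timeout=timeout_sec) as resp:
            return resp.read().decode("utf-8").strip()
    except (urllib.error.URLError, TimeoutError, OSError):
        return FALLBACK_PRIORITY  # vendor unreachable: fall back to humans
```

The point is not the specific code but the contract: no clinical process should have an AI vendor as its only path to an answer.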
As for the risk of cyber-attacks, the deployment of AI dramatically heightens cyber risk by introducing new forms of vulnerability that circumvent traditional perimeter defenses. In a highly targeted adversarial attack, malicious actors inject subtle, non-obvious "noise" into the input data, altering a medical image or an entry in a patient's electronic health record, and thereby force the AI model to produce a critically incorrect result, such as a missed diagnosis or an inappropriate treatment recommendation.
Sophisticated cyber attackers can also query a live production model repeatedly and analyze its outputs to infer the model's underlying architecture, parameters, and algorithms. This may result in the loss of proprietary intellectual property without the attacker ever accessing the source code, posing a major competitive and financial threat. If endpoints are not secured with strict rate limiting, robust authentication, and input validation, they offer an easy entry point for such extraction and evasion attacks. Figure 1 presents the AI risks.

Figure 1. AI Risks in Healthcare
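The rate-limiting defense mentioned above is worth making concrete, because repeated high-volume querying is the main ingredient of model-extraction attacks. A classic per-client token bucket raises the attacker's cost dramatically; this is a minimal sketch, not a production defense, and the rate and burst values are illustrative:

```python
import time

class TokenBucket:
    """Simple per-client rate limiter for a model-serving endpoint.

    Each request consumes one token; tokens refill at `rate_per_sec`
    up to a maximum of `burst`. Legitimate clinical clients stay well
    under the limit, while bulk extraction queries get rejected.
    """
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec
        self.capacity = burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last request
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # reject: client exceeded its query budget
```

In production this bucket would be keyed per authenticated client and combined with input validation and audit logging, so that an extraction attempt is not only slowed but also detected.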
Beyond the above risks, AI can misinterpret data, miss subtle cues, or generate irrelevant correlations, leading to incorrect diagnoses or treatment plans. Healthcare providers may over-trust an AI's output and fail to apply their own critical judgment and experience, especially when the AI is wrong. There is also a risk of integrating unproven or poorly tested AI tools into clinical workflows without proper validation and oversight. Moreover, healthcare is fundamentally a human-centered profession that requires empathy and nuanced communication. Overuse of AI, particularly in mental-health chatbots or administrative tasks, may erode the critical human-to-human connection, trust, and empathetic care that patients need, especially during vulnerable moments. Addressing these risks requires comprehensive regulatory frameworks, ethical guidelines, diverse and high-quality training data, and a commitment to transparency and clinical validation before widespread adoption.
Actionable Recommendations for Executives
To avoid such risks, top executives must centralize AI decision-making and establish a multidisciplinary framework that promotes systemic thinking across clinical, legal, ethical, and technological aspects. All AI tools, whether purchased or developed in-house, must undergo rigorous pre-deployment validation using patient data to test performance and fairness across diverse demographics. Top executives must formally define the role of clinicians in overseeing each AI output, ensuring the technology augments decision-making rather than replacing clinical judgment. Such steps will build the infrastructure necessary for the responsible use of AI.
Managers of clinical workflows who are responsible for patient safety should add a new category to the existing safety-reporting system specifically for AI-related errors, anomalies, or outputs that were overridden by a clinician. To ensure transparency at the point of care, executives should mandate clear disclosure to staff and patients whenever AI is supporting a clinical decision. Further, whenever an AI system produces a result that deviates significantly from a clinician's expectation, institutional protocols should require a secondary review by a human expert.
Data managers who focus on system integrity are called upon to deploy real-time dashboards that track AI model performance (accuracy) and to set clear, automated thresholds for retraining models whose performance declines. All AI vendors should be required to provide documentation on their model's training data, bias-mitigation steps, and clear liability terms; vendors who offer explainable-AI capabilities should be prioritized. Strict data governance should be enforced, focusing on secure, de-identified data streams for training and on robust access controls to prevent unauthorized access or adversarial attacks.
Managers with high accountability should provide mandatory training for all staff on AI capabilities and limitations, and on how to safely question or override AI-generated outputs. They must clearly document who is accountable for decisions resulting from AI input; in all current applications, the clinician who accepts the AI recommendation remains ultimately accountable for patient care. Lastly, they should work with legal teams to update patient consent forms with clear, jargon-free language about how patient data is used to train and refine institutional AI models. All managers should raise awareness of cyber-attacks, cultivate an organizational culture of caution, and assign professionals (e.g., cyber methodologists) to prepare all employees to respond effectively in the era of cyber threats.
Additional Reading
Dalky A, Osaid Malkawi RM, Alrawashdeh A, ALBashtawy M, Hani SB. Perceptions, barriers, and risk of artificial intelligence among healthcare professionals: A cross-sectional study. Digital Health. 2025 Jul;11:20552076251360924.
Gupta S, Kamboj S, Bag S. Role of risks in the development of responsible artificial intelligence in the digital healthcare domain. Information Systems Frontiers. 2023 Dec;25(6):2257-74.
Siafakas N, Vasarmidi E. Risks of Artificial Intelligence (AI) in Medicine. Pneumon. 2024 Jul 1;37(3).