
AI Risks in ISO 14971 Framework

Post Summary

AI in healthcare is transforming medical devices, but it also introduces risks that traditional frameworks like ISO 14971 struggle to address. Here’s what you need to know:

  • AI Risks in Medical Devices: By mid-2025, over 878 AI/ML-enabled medical devices were FDA-cleared, yet 6.3% faced recalls due to algorithmic flaws. Issues like bias, data drift, and "black box" unpredictability make AI systems harder to manage than static devices.
  • ISO 14971 Overview: This global standard helps manage medical device risks, emphasizing design safety, protective measures, and safety information. However, it was created with static systems in mind, leaving gaps for AI-specific challenges.
  • AI-Specific Challenges: AI risks include poor training data, algorithmic bias, and performance declines (data drift) post-deployment. The lack of transparency in many AI models also complicates risk management.
  • Filling the Gaps: Standards like BS/AAMI 34971 complement ISO 14971 by addressing AI-specific risks such as over-reliance on AI (automation bias), adaptive learning, and data quality issues.
  • Cybersecurity Concerns: AI systems are vulnerable to attacks like adversarial inputs and data poisoning, which can compromise patient safety. Strong cybersecurity practices, including threat modeling and secure updates, are essential.

Key Takeaway: To manage AI risks effectively, ISO 14971 must be combined with AI-specific guidance and robust cybersecurity measures. This ensures safety and compliance throughout the lifecycle of AI-enabled medical devices.


ISO 14971 as a Starting Point for AI Risk Management

ISO 14971:2019 is widely recognized as the go-to framework for managing risks associated with AI-enabled medical devices. Recognized by the FDA as a consensus standard, it offers a structured approach for identifying potential hazards and implementing necessary controls [3]. Dr. Stephen Odaibo, CEO & Founder of RETINA-AI Health, Inc., emphasizes its importance:

"Risk management is the underlying principle that governs the regulation of medical devices, whether traditional devices or Software as a Medical Device (SaMD)." [1]

This framework lays the groundwork for tackling the unique challenges AI brings to the table, which will be explored further in later sections.

Main Elements of ISO 14971

ISO 14971 outlines a step-by-step approach that spans the entire lifecycle of a medical device, from design to post-market activities. It defines risk as the combination of the probability of occurrence of harm and the severity of that harm. The process includes several key steps (a worked sketch follows the list):

  • Risk analysis: Identifying potential hazards and estimating associated risks.
  • Risk evaluation: Comparing those risks against predefined acceptability criteria.
  • Risk control: Applying and verifying measures to reduce risks.
  • Residual risk evaluation: Assessing the overall risk that remains after controls are in place.
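
To make the evaluation step concrete, here is a minimal sketch in Python of a probability/severity matrix check. The levels, scoring, and acceptability cutoffs are illustrative assumptions; ISO 14971 requires each manufacturer to define its own criteria in the risk management plan:

```python
from enum import IntEnum

class Probability(IntEnum):
    IMPROBABLE = 1
    REMOTE = 2
    OCCASIONAL = 3
    PROBABLE = 4
    FREQUENT = 5

class Severity(IntEnum):
    NEGLIGIBLE = 1
    MINOR = 2
    SERIOUS = 3
    CRITICAL = 4
    CATASTROPHIC = 5

def evaluate_risk(p: Probability, s: Severity) -> str:
    """Classify a risk against predefined acceptability criteria.

    The numeric cutoffs are illustrative only; real criteria come
    from the manufacturer's own risk management plan.
    """
    score = int(p) * int(s)
    if score <= 4:
        return "acceptable"
    if score <= 12:
        return "reduce risk, then evaluate residual risk"
    return "unacceptable without further risk control"

# Example: an occasional hazard with critical severity
print(evaluate_risk(Probability.OCCASIONAL, Severity.CRITICAL))
```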

The standard prioritizes risk control through a hierarchy: first, eliminate hazards via inherent safety by design; next, integrate protective measures directly into the device; and finally, provide safety information like warnings or user training materials [3].

A Risk Management File (RMF) serves as a comprehensive record of all risk-related activities, including traceability and verification. This ensures that risk management remains an ongoing process, rather than a one-time effort.

How ISO 14971 Applies to AI/ML Systems

When dealing with AI/ML systems, ISO 14971’s principles require adaptation to address the distinct risks these technologies introduce. The standard’s clauses align well with AI-specific challenges, such as algorithmic bias or adaptive learning, which demand tailored mitigation strategies. For instance:

  • Clause 5 (Risk Analysis): Extends to identifying hazards like bias in algorithms or issues with data quality.
  • Clause 7 (Risk Control): Focuses on implementing design changes or fail-safes to address AI-related errors.
  • Clause 10 (Production & Post-Production): Highlights the need to monitor AI systems for performance "drift" and assess the impact of real-world data.

In the context of AI, evaluating and mitigating unwanted bias in machine learning models becomes critical, and ensuring high-quality data throughout the training process is equally important. Techniques like Failure Modes and Effects Analysis (FMEA) or Fault Tree Analysis can help identify potential software failures and their implications for patient safety; a small FMEA sketch follows.
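
As an illustration of how classic FMEA scoring might be applied to ML failure modes, here is a minimal sketch. The failure modes, ratings, and prioritization are hypothetical examples, not values drawn from any standard or device:

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    description: str
    severity: int      # 1 (negligible) .. 10 (catastrophic)
    occurrence: int    # 1 (rare) .. 10 (frequent)
    detection: int     # 1 (always detected) .. 10 (undetectable)

    @property
    def rpn(self) -> int:
        # Classic FMEA risk priority number
        return self.severity * self.occurrence * self.detection

modes = [
    FailureMode("Training data under-represents elderly patients", 8, 5, 7),
    FailureMode("Silent accuracy loss after input pipeline change", 7, 4, 8),
    FailureMode("Mislabeled ground truth in training set", 6, 3, 6),
]

# Review the highest-priority failure modes first
for m in sorted(modes, key=lambda fm: fm.rpn, reverse=True):
    print(f"RPN={m.rpn:4d}  {m.description}")
```

Dr. Odaibo underscores the challenges unique to AI: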

"The relatively rapid cycles of development and change of software introduce unique categories of risks and considerations that require special attention." [1]

This highlights the need for additional safeguards. AI systems, which evolve over time, require Predetermined Change Control Plans to ensure safety as algorithms adapt post-market. Furthermore, manufacturers must actively engage in post-market surveillance through methods like customer surveys, literature reviews, and performance monitoring, rather than relying solely on bug reports.

The next section will delve into AI-specific challenges that ISO 14971 does not fully address.

Filling the Gaps: What ISO 14971 Doesn't Cover for AI

ISO 14971 provides a solid framework for risk management in medical devices, but it primarily focuses on static systems. This leaves a noticeable gap when it comes to addressing the unique, dynamic risks associated with AI: even among FDA-cleared AI/ML devices, about 6.3% have been recalled due to issues specific to AI algorithms [2].

Risks During AI Development

The standard doesn't delve into risk management during critical AI development stages like data collection, labeling, feature selection, and model training. As MD101 Consulting points out:

"AI design is a data‑driven design. Thus, lots of risks stem from poor data quality or bias in data." [4]

AI systems differ from traditional software in that they can generate irrelevant or inaccurate outputs - sometimes referred to as "confabulations" - while still functioning normally [4]. A 2025 study of 691 FDA-cleared AI devices revealed that fewer than half disclosed key validation details such as sample size or demographics [2]. This lack of transparency makes it hard to assess if training data adequately represents the target patient population.

Data issues like poor quality, inconsistent labeling, and biases - such as imbalanced classes, selection bias, or variability among experts - create risks that traditional Failure Modes and Effects Analysis (FMEA) methods cannot fully address [4]. To mitigate these risks, manufacturers should implement robust data management practices, treating datasets as controlled documents within their Quality Management System. This includes ensuring version control and traceability [2][4].
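
One lightweight way to treat a dataset as a controlled document is to pin every released version to a content hash and a manifest. A minimal sketch, assuming datasets are stored as files on disk; the file names and metadata fields are hypothetical:

```python
import hashlib
import json
from datetime import date
from pathlib import Path

def dataset_fingerprint(path: Path) -> str:
    """SHA-256 over file contents so any silent change is detectable."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def register_dataset(path: Path, version: str, notes: str) -> dict:
    """Record a dataset release the way a QMS controls a document."""
    record = {
        "file": str(path),
        "version": version,
        "sha256": dataset_fingerprint(path),
        "released": date.today().isoformat(),
        "notes": notes,
    }
    manifest = Path(f"{path.stem}-{version}.manifest.json")
    manifest.write_text(json.dumps(record, indent=2))
    return record

# Example (hypothetical file name):
# register_dataset(Path("train_images.tar"), "1.3.0",
#                  "Added 2,000 de-identified scans")
```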

Another major challenge is the opacity of deep learning models. This can lead to human-AI interaction risks, such as automation bias, where users may overly rely on AI recommendations and disregard conflicting clinical evidence [4]. As AI systems evolve, these risks don't stop at development - they extend into post-deployment scenarios.

Managing Changing AI Systems

AI systems face unique challenges after deployment, particularly because they evolve over time. ISO 14971 assumes software remains static post-deployment, but AI/ML systems often encounter model drift, where performance declines as real-world data diverges from the training data [4][5]. The standard doesn't offer guidance for managing adaptive learning systems that continuously evolve.

To address this gap, Predetermined Change Control Plans (PCCPs) have been developed. These plans outline anticipated model updates and validation protocols through two components: the SaMD Pre-Specifications (SPS) and the Algorithm Change Protocol (ACP). Dr. Stephen Odaibo, CEO & Founder of RETINA‑AI Health, Inc., explains:

"The SPS pre‑specifies the changes the device manufacturer anticipates making to the device, while the ACP is the algorithm via which such change will be achieved." [1]

Effectively managing these systems requires proactive strategies, such as planned retraining cycles and continuous post-market surveillance. Manufacturers should use performance dashboards to monitor real-world metrics like sensitivity and specificity, comparing them to the original validation baseline to detect drift early [2][4]. Unlike traditional post-market surveillance, which often relies on bug reports, AI systems require active monitoring of statistical performance indicators.
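
A minimal sketch of such a dashboard check, assuming binary outputs with adjudicated ground truth are logged post-market; the baseline values and tolerance are placeholders for a manufacturer's own validation results:

```python
def sensitivity(tp: int, fn: int) -> float:
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    return tn / (tn + fp)

# Baseline from the original validation study (illustrative numbers)
BASELINE = {"sensitivity": 0.94, "specificity": 0.91}
TOLERANCE = 0.03  # allowed absolute drop before a drift alert

def check_drift(tp: int, fp: int, tn: int, fn: int) -> list[str]:
    """Compare rolling field performance against the validation baseline."""
    current = {
        "sensitivity": sensitivity(tp, fn),
        "specificity": specificity(tn, fp),
    }
    return [
        f"{metric}: {current[metric]:.3f} vs baseline {base:.3f}"
        for metric, base in BASELINE.items()
        if current[metric] < base - TOLERANCE
    ]

# Example: a month of field data in which sensitivity has slipped
alerts = check_drift(tp=430, fp=60, tn=820, fn=55)
print(alerts or "within tolerance")
```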

The iterative nature of AI development adds another layer of complexity. Retraining introduces new risks that must be evaluated, which doesn't align with ISO 14971's linear lifecycle approach. To address this, organizations should define acceptance criteria early in the concept phase and ensure retraining data is kept separate from verification datasets to prevent overfitting [6]; a quick disjointness check is sketched below.
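
A quick way to enforce that separation is to hash record identifiers from both sets and fail if they intersect. A minimal sketch; the record identifiers and how they are loaded are hypothetical:

```python
import hashlib

def record_key(record_id: str) -> str:
    # Hash IDs so the check works even when the two sets
    # come from different source systems
    return hashlib.sha256(record_id.encode()).hexdigest()

def assert_disjoint(retraining_ids: set[str],
                    verification_ids: set[str]) -> None:
    """Raise if any record appears in both datasets."""
    overlap = ({record_key(r) for r in retraining_ids}
               & {record_key(r) for r in verification_ids})
    if overlap:
        raise ValueError(f"{len(overlap)} records appear in both sets; "
                         "verification data must stay untouched by retraining")

# Example with hypothetical study identifiers
assert_disjoint({"study-0001", "study-0002"}, {"study-0103", "study-0104"})
```

These challenges highlight the need for a more integrated approach that considers both the technical and security aspects of AI risk management.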

Using BS/AAMI 34971 for Additional Guidance


What BS/AAMI 34971 Offers


BS/AAMI 34971:2023 serves as a bridge between traditional risk management practices and the unique challenges posed by AI systems. It doesn’t replace ISO 14971 but complements it, demonstrating how the same risk management principles can be applied to machine learning (ML) systems. Pat Baird, Co-chair of the AAMI Artificial Intelligence Committee, puts it this way:

"The idea is that the risk management process is the same, with new failure modes specific to AI."

The standard pinpoints five critical risk areas: data management, bias, data storage/security/privacy, overtrust, and adaptive systems. For instance, it addresses "overtrust" - a scenario where clinicians might rely on AI recommendations even when those recommendations conflict with their professional judgment.

Another key takeaway from the guidance is the difference between having data and having actionable knowledge. Baird highlights:

"One of the things we noticed in the literature about ML systems is that many times failures occurred because, although the development team had data, they didn't have knowledge."

This underscores the need for clinical expertise to ensure AI technology is applied in a way that aligns with real-world medical contexts. BS/AAMI 34971 extends the traditional risk management framework to address these AI-specific challenges effectively.

Applying BS/AAMI 34971 in Practice

The framework outlined in BS/AAMI 34971 can now be put into action by manufacturers. A critical first step is ensuring that your risk management team includes professionals with clinical expertise alongside data scientists. This combination helps prevent errors caused by assumptions that may be technically sound but clinically flawed.

The standard also offers detailed guidance for tasks specific to ML systems, such as feature extraction, algorithm training, and model evaluation. In addition, the annexes include practical templates designed to manage risks associated with autonomous systems. As of May 2026, this guidance is being transitioned into an international technical specification, ISO/DTS 24971-2, which is nearing final approval.

For continuously learning AI systems, BS/AAMI 34971 introduces updated change control protocols. These protocols not only simplify regulatory submissions but also boost market confidence in the safety and reliability of AI solutions. Furthermore, they pave the way for integrating AI risk management with medical device cybersecurity measures, creating a more comprehensive approach to risk mitigation.

Combining Cybersecurity and AI Risk Management

As AI continues to play a bigger role in healthcare, integrating strong cybersecurity practices is essential to ensure patient safety. When AI-driven medical devices are responsible for delivering treatments or making diagnostic decisions, any security weakness can pose direct risks to patients.

How Cybersecurity Risks Affect Patient Safety

Cybersecurity isn't just an IT issue - it's a matter of life and death in healthcare. As IntuitionLabs puts it:

"Cybersecurity is directly a patient-safety issue. Unlike typical IT, attacks on medical devices can cause physical harm or death." [8]

The numbers are alarming. Studies reveal that 53% to 60% of connected medical devices have critical vulnerabilities, and 73% of networked IV infusion pumps contain at least one security flaw [8]. In 2021, healthcare organizations experienced an average of 830 cyberattacks per week, marking a 71% jump compared to the previous year [8]. High-profile incidents like the FDA recall of 465,000 Abbott pacemakers in 2017 - due to wireless protocol vulnerabilities that allowed hackers to drain batteries or alter pacing - highlight the real-world dangers [8]. Similarly, Medtronic's 2019 recall of certain MiniMed insulin pumps demonstrated how attackers could manipulate wireless signals to change insulin doses [9].

AI systems present additional risks that go beyond traditional cybersecurity concerns. Techniques like adversarial inputs and data poisoning can cause "silent performance degradation", where devices appear to work normally but deliver incorrect clinical results [8]. These vulnerabilities can suppress alarms, adjust therapy doses, or corrupt diagnostic data - sometimes without leaving any visible signs of tampering [9]. With stakes this high, robust cybersecurity measures are non-negotiable.

Protecting AI Systems from Security Threats

To address these risks, manufacturers must integrate cybersecurity into every phase of AI system development and management. This starts with incorporating cybersecurity into risk management frameworks and design controls from the outset [9]. Threat modeling techniques like STRIDE or LINDDUN can help identify potential attack paths and connect those threats to specific clinical risks. Aligning these efforts with the ISO 14971 framework ensures that safety remains a top priority.
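
To illustrate how threat categories can be tied to clinical risk, here is a minimal STRIDE-style worksheet for a hypothetical connected infusion pump; every entry is an illustrative example, not a complete threat model:

```python
# One row per STRIDE category:
# (example threat, possible clinical consequence)
stride_worksheet = {
    "Spoofing": ("Forged controller identity sends commands",
                 "Unauthorized dose change"),
    "Tampering": ("Modified model weights in transit",
                  "Wrong therapy recommendation"),
    "Repudiation": ("Dose changes logged without audit trail",
                    "Untraceable adverse event"),
    "Information disclosure": ("PHI leaked from inference logs",
                               "Privacy harm, regulatory breach"),
    "Denial of service": ("Flooded device radio link",
                          "Therapy interruption"),
    "Elevation of privilege": ("Open debug port grants root access",
                               "Full control of pump firmware"),
}

for category, (threat, clinical_risk) in stride_worksheet.items():
    print(f"{category:24s} {threat:45s} -> {clinical_risk}")
```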

Key steps to enhance security include the following (an input-validation sketch follows the list):

  • Adversarial training to prepare systems for malicious inputs.
  • Input validation to ensure data integrity.
  • Provenance tracking for models and training datasets.
  • Using TLS for secure data transmission, strong encryption for stored data, and multi-factor authentication (MFA).
  • Maintaining a detailed Software Bill of Materials (SBOM) to monitor vulnerabilities.
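
As one example of the input-validation step above, a minimal sketch that range-checks physiological inputs before they reach the model; the field names and bounds are hypothetical, not clinical guidance:

```python
from dataclasses import dataclass

# Plausible physiological bounds (illustrative only)
BOUNDS = {
    "heart_rate_bpm": (20, 300),
    "spo2_percent": (50, 100),
    "glucose_mg_dl": (10, 1000),
}

@dataclass
class Vitals:
    heart_rate_bpm: float
    spo2_percent: float
    glucose_mg_dl: float

def validate(v: Vitals) -> list[str]:
    """Reject out-of-range values before they reach the model."""
    errors = []
    for field, (lo, hi) in BOUNDS.items():
        value = getattr(v, field)
        if not lo <= value <= hi:
            errors.append(f"{field}={value} outside [{lo}, {hi}]")
    return errors

# Flags spo2_percent as out of range
print(validate(Vitals(heart_rate_bpm=72, spo2_percent=120,
                      glucose_mg_dl=95)))
```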

For AI/ML devices, a Predetermined Change Control Plan (PCCP) allows for controlled model updates. These updates should include strict digital signatures, anti-rollback mechanisms, and runtime monitoring to detect any model drift.
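
A minimal sketch of such an update gate, using Ed25519 signatures via the widely used `cryptography` package; the manifest layout and the strictly increasing version rule are assumptions for illustration:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)
from cryptography.hazmat.primitives.serialization import (
    Encoding, PublicFormat)

def accept_update(model_bytes: bytes, signature: bytes, new_version: int,
                  installed_version: int, manufacturer_pubkey: bytes) -> bool:
    """Install a model update only if it is authentic and not a rollback."""
    if new_version <= installed_version:  # anti-rollback check
        return False
    try:
        Ed25519PublicKey.from_public_bytes(manufacturer_pubkey).verify(
            signature, model_bytes)
    except InvalidSignature:
        return False
    return True

# Demo with a freshly generated key pair (in production, the private key
# never leaves the manufacturer's signing infrastructure)
priv = Ed25519PrivateKey.generate()
pub = priv.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
weights = b"model-weights-v2"
print(accept_update(weights, priv.sign(weights), new_version=2,
                    installed_version=1, manufacturer_pubkey=pub))  # True
```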

Tracking vulnerabilities is equally critical. As Cenit Consulting emphasizes:

"You can't patch what you don't track." [9]

Manufacturers should use SBOM formats like SPDX or CycloneDX and conduct regular CVE scans of third-party AI frameworks and libraries. A 2024 study found 993 product vulnerabilities across 966 medical devices, many of which are exploited by advanced persistent threat groups [8]. Ensuring that all firmware and AI model updates are digitally signed - with anti-rollback features - helps prevent attackers from exploiting outdated versions.
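
As an illustration, here is a minimal sketch that emits a CycloneDX-shaped component list for the Python packages installed in the current environment; a real submission would come from a dedicated SBOM tool, and the field set here is deliberately simplified:

```python
import json
from importlib.metadata import distributions

def minimal_sbom() -> dict:
    """List installed packages in a CycloneDX-like component structure."""
    components = [
        {
            "type": "library",
            "name": dist.metadata["Name"],
            "version": dist.version,
        }
        for dist in distributions()
        if dist.metadata["Name"]  # skip malformed metadata
    ]
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "components": sorted(components, key=lambda c: c["name"].lower()),
    }

print(json.dumps(minimal_sbom(), indent=2)[:500])
```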

For healthcare organizations managing numerous AI-enabled devices, platforms like Censinet RiskOps can be invaluable. These tools simplify risk assessments for medical devices, clinical applications, and supply chains. By leveraging AI-powered insights, these platforms help identify and mitigate cybersecurity vulnerabilities while ensuring human oversight remains central to critical patient safety decisions.

Creating a Complete Risk Management Framework

To effectively manage risks in AI systems, it's essential to integrate ISO 14971 with AI-specific guidance and cybersecurity standards. This doesn't mean replacing ISO 14971 - it remains the backbone - but rather layering in controls tailored to the challenges unique to AI. As Ran Chen, a global MedTech expert, puts it:

"Risk management is not a phase of product development. It is a continuous process that starts before design inputs are finalized... and continues until the device is decommissioned." [10]

This continuous approach involves incorporating Good Machine Learning Practices (GMLP), Predetermined Change Control Plans (PCCP), and leveraging ISO/TR 24971 Annex F to address cybersecurity. These controls must align with design inputs under 21 CFR 820.30 or ISO 13485. This builds on earlier discussions about the gaps in traditional risk management when applied to AI, ensuring the framework addresses every stage of the product lifecycle.

Managing Risks Across the Product Lifecycle

AI devices demand ongoing risk evaluation - not just during development but throughout their entire lifecycle, from market release to eventual retirement. During development, it's critical to define data requirements and algorithm specifications while treating trained models as distinct release items with documented traceability in the Design History File (DHF).

One of the biggest challenges comes post-deployment, when AI systems start interacting with real-world data that may differ significantly from their training inputs. This is where active post-market surveillance becomes essential. It helps detect issues like model drift, where an algorithm’s performance declines due to shifts in clinical practices or patient demographics.

Cybersecurity should be a constant focus as well. Implement robust controls such as vulnerability management, secure update protocols, and incident response plans to safeguard the AI system throughout its lifecycle.

Organizing Risk Documentation

A robust Risk Management File (RMF) should include documentation for the Algorithm Change Protocol (ACP) and SaMD Pre-Specifications (SPS) to manage updates effectively. Training and validation datasets should also be treated as controlled documents within your Quality Management System (QMS), with proper versioning and traceability, similar to hardware specifications.

The EU AI Act, set to take effect in August 2026, requires high-risk AI providers to maintain a documented QMS [2]. This means your documentation must demonstrate compliance not only with ISO 14971 but also with AI-specific governance standards. For organizations managing multiple AI-enabled devices, platforms like Censinet RiskOps™ can centralize risk documentation. These tools combine AI-driven insights with human oversight, routing critical risks to designated stakeholders, such as AI governance committees, for review and approval.

Modern QMS platforms with AI capabilities are particularly valuable for managing the extensive documentation required for frequent model retraining and versioning. A 2025 study of 691 FDA-cleared AI devices revealed that only 28% had documented premarket safety assessments in their public summaries [2]. Strong documentation practices go beyond meeting regulatory requirements - they demonstrate a commitment to patient safety. With thorough records in place, continuous market monitoring becomes the last critical piece of this framework.

Monitoring Risks After Market Release

Post-market risk monitoring is critical. Establish clear performance thresholds - such as sensitivity and specificity levels - that trigger Corrective and Preventive Actions (CAPA) when breached [2]. Active surveillance ensures that field data is promptly integrated into the risk management process. If real-world performance deviates from approved claims, your QMS should flag the issue and initiate an investigation immediately. Maintaining strict version control for training code and hyperparameters is another safeguard that ensures consistency and accountability.
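
One way to make such a threshold statistically meaningful is to trigger on a confidence bound rather than a raw point estimate, so a handful of early field cases doesn't fire a false alarm. A minimal sketch, assuming a binary sensitivity metric and an illustrative floor standing in for a device's approved claims:

```python
import math

def wilson_upper_bound(successes: int, n: int, z: float = 1.96) -> float:
    """Upper bound of the Wilson score interval for a proportion."""
    if n == 0:
        return 1.0
    p = successes / n
    denom = 1 + z**2 / n
    center = p + z**2 / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (center + margin) / denom

SENSITIVITY_FLOOR = 0.90  # illustrative value from approved claims

def capa_required(detected: int, actual_positives: int) -> bool:
    """Open a CAPA only when the data show, with confidence,
    that field sensitivity has fallen below the approved floor."""
    return wilson_upper_bound(detected, actual_positives) < SENSITIVITY_FLOOR

print(capa_required(160, 200))  # True: even optimistically below 0.90
print(capa_required(8, 10))     # False: too little data to conclude a breach
```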

The regulatory environment is also shifting. The FDA's 2026 QMSR update will align U.S. requirements with ISO 13485, emphasizing a risk-based approach for AI [2]. Meanwhile, the EU MDR is moving away from the "As Low As Reasonably Practicable" (ALARP) standard toward "As Far As Possible" (AFAP). This means that economic considerations cannot justify accepting a risk; risks must be minimized to the greatest extent technically feasible, without introducing new hazards [10]. This evolving landscape underscores the importance of proactive risk management and a commitment to minimizing risks wherever possible.

Conclusion

ISO 14971 continues to serve as the cornerstone of risk management for medical devices, but the rise of AI introduces new challenges that stretch beyond its initial scope. Pat Baird, Co-chair of the AAMI Artificial Intelligence Committee, puts it succinctly:

"The idea is that the risk management process is the same, and here are a few new ways that this particular technology can fail that you might not have thought about" [7].

To address these complexities, ISO 14971 must be adapted with AI-specific guidance, such as AAMI TIR34971:2023, while also integrating cybersecurity measures outlined in ISO/TR 24971 Annex F. This approach ensures that the evolving technological and regulatory landscape is adequately addressed.

A crucial evolution in risk management involves moving from static assessments to a dynamic, lifecycle-oriented approach. AI systems, heavily reliant on data, are prone to issues like model drift or shifts in clinical practices. These factors demand continuous post-market surveillance, detailed documentation, and tools like PCCP to manage updates efficiently without requiring new regulatory submissions. With the FDA’s QMSR update aligning with ISO 13485 by February 2026 and the EU AI Act set to take effect in August 2026, organizations must prepare for these shifting regulatory expectations.

Cybersecurity is another critical piece of the puzzle. Threats to data integrity, confidentiality, and availability directly impact patient safety. To mitigate these risks, organizations should treat datasets as controlled documents, implement human-in-the-loop oversight, and enforce strict version control. These measures are essential for navigating the increasingly complex intersection of AI and cybersecurity, reinforcing the need for adaptable and integrated risk management systems.

The growing recognition of AI as a risk factor underscores its importance. For example, 56% of Fortune 500 companies identified AI as a risk in 2024, a sharp rise from just 9% the year before [11]. Furthermore, 98% of risk executives report that digital transformation has already enhanced their ability to identify and mitigate risks [12]. As KPMG aptly noted:

"AI is both the accelerator and the constraint" [12].

To thrive in this rapidly changing landscape, organizations need a cohesive strategy that addresses the unique challenges posed by AI while maintaining the rigor of ISO 14971.

For healthcare organizations managing AI-powered medical devices alongside broader risks like cybersecurity and third-party dependencies, platforms such as Censinet RiskOps™ offer a centralized solution. By combining AI-driven analytics with human oversight, these tools streamline risk documentation and ensure critical issues are escalated to the right stakeholders. This approach helps maintain a comprehensive and flexible risk management framework throughout the product lifecycle.

FAQs

What AI risks does ISO 14971 miss?

ISO 14971 offers a strong framework for managing risks in medical devices, but it falls short when it comes to addressing challenges unique to AI, such as data bias, model drift, and cybersecurity vulnerabilities. These issues are especially important because AI systems are constantly evolving and often operate as "black boxes", making risk management more complex. On top of that, the framework may not align well with the rapid development cycles and ever-changing dynamics of AI/ML-based Software as a Medical Device (SaMD).

How do PCCPs manage AI model updates safely?

PCCPs help manage AI model updates securely by setting clear boundaries for changes. They define specific, limited, and verifiable updates within strict guidelines. This process includes structured versioning, governance protocols, and pre-approved adjustments, enabling updates without needing fresh submissions. These steps ensure continuous validation and maintain transparency throughout.
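
To illustrate what "specific, limited, and verifiable" can mean in practice, here is a minimal sketch of an automated gate that checks a proposed update against a pre-specified envelope; the fields and limits are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ChangeEnvelope:
    """Pre-specified bounds an update must stay within (illustrative)."""
    min_sensitivity: float = 0.92
    min_specificity: float = 0.90
    allowed_change_types: frozenset = frozenset(
        {"retrain_weights", "recalibrate_threshold"})

@dataclass
class ProposedUpdate:
    change_type: str
    sensitivity: float
    specificity: float

def within_pccp(update: ProposedUpdate, env: ChangeEnvelope) -> bool:
    """A change outside the envelope requires a new regulatory submission."""
    return (update.change_type in env.allowed_change_types
            and update.sensitivity >= env.min_sensitivity
            and update.specificity >= env.min_specificity)

env = ChangeEnvelope()
print(within_pccp(ProposedUpdate("retrain_weights", 0.94, 0.91), env))     # True
print(within_pccp(ProposedUpdate("new_input_modality", 0.96, 0.95), env))  # False
```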

How should cybersecurity be built into AI device risk management?

Integrating cybersecurity into the risk management of AI-powered medical devices is essential for reducing vulnerabilities. Effective strategies include using cryptographic hashing, encryption, and immutable audit logs to protect data integrity. These measures ensure that sensitive information remains secure and unaltered.
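
As one illustration of an immutable audit log, here is a minimal sketch of hash chaining, where each entry commits to the hash of the previous entry so any after-the-fact edit is detectable; the entry fields are illustrative:

```python
import hashlib
import json

def append_entry(log: list[dict], event: dict) -> None:
    """Chain each entry to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"event": event, "prev": prev_hash}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})

def verify_chain(log: list[dict]) -> bool:
    """Recompute every hash; any tampering breaks the chain."""
    prev = "0" * 64
    for entry in log:
        body = {"event": entry["event"], "prev": entry["prev"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log: list[dict] = []
append_entry(log, {"action": "dose_changed", "user": "clinician-17"})
append_entry(log, {"action": "model_updated", "version": "2.1"})
print(verify_chain(log))              # True
log[0]["event"]["user"] = "intruder"  # simulate tampering
print(verify_chain(log))              # False: tampering detected
```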

To counter threats like data manipulation, techniques such as adversarial testing and anomaly detection play a critical role. They help identify and address potential risks before they can impact device performance or patient safety.

Additionally, adopting secure development practices, adhering to regulatory guidance, and utilizing centralized platforms for monitoring provide ongoing oversight. These approaches align with ISO 14971 principles, which focus on thorough and systematic risk management for medical devices.
