HIPAA Is Not Enough: Why AI and Emerging Tech Demand a New Risk Standard
Post Summary
- HIPAA focuses on data privacy and security but does not address AI-specific risks like algorithmic bias, real-time data processing, and evolving cybersecurity threats.
- Risks include algorithmic bias, data privacy breaches, cybersecurity vulnerabilities, and compliance gaps with evolving regulations.
- A new standard is needed to address the unique challenges of AI and emerging technologies, including real-time monitoring, ethical AI use, and advanced cybersecurity measures.
- Organizations can adopt AI-specific risk frameworks, conduct bias audits, implement real-time monitoring, and ensure compliance with updated regulatory standards.
- Continuous monitoring provides real-time insights into vulnerabilities, enabling faster responses to threats and ensuring compliance with evolving standards.
- Benefits include improved patient safety, reduced compliance risks, enhanced trust in AI systems, and better alignment with future regulatory requirements.
HIPAA was designed to protect healthcare data, but it’s outdated for today’s AI-driven world. AI introduces risks like re-identification of anonymized data, algorithmic bias, and increased cybersecurity vulnerabilities. Current regulations don’t address these challenges, leaving healthcare organizations exposed.
Here’s why this matters and what needs to change:
- AI Risks: AI can reverse anonymization, create bias in care, and complicate compliance due to its “black box” nature.
- Cybersecurity Gaps: Connected systems and AI expand attack surfaces, increasing the likelihood of breaches.
- Regulatory Shortcomings: HIPAA doesn’t cover AI-specific risks like algorithm accountability or fairness.
- Practical Solutions: Real-time monitoring, stronger vendor management, and ethical AI guidelines are critical.
Healthcare organizations must modernize risk management strategies to protect patient data and ensure compliance in this rapidly evolving environment.
AI and Technology Risks Beyond HIPAA's Scope
As AI continues to weave itself into the fabric of healthcare systems, it brings along vulnerabilities that HIPAA simply wasn't designed to address. These emerging risks highlight gaps that healthcare organizations need to tackle head-on. Let’s dive into three critical areas where AI and new technologies present unique challenges.
Data Aggregation and Re-identification Threats
One of the biggest concerns is AI's ability to re-identify data that was previously thought to be anonymous. HIPAA’s de-identification standards were sufficient in simpler times, but AI’s sophisticated pattern recognition can undo anonymization. For example, a 2019 study demonstrated that AI could re-identify almost all individuals in anonymized datasets using just 15 demographic attributes [6].
In another case, research by Na et al. revealed that an algorithm managed to re-identify 85.6% of adults and 69.8% of children in a physical activity cohort study, even after data aggregation and the removal of identifiable health information [3]. To make matters worse, a 2018 study found that data collected by ancestry companies could identify about 60% of Americans of European descent [3]. These examples show how easily AI can exploit data vulnerabilities, underscoring the inadequacy of HIPAA’s current framework for safeguarding healthcare data.
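To make the re-identification math concrete, here is a minimal sketch, in Python with pandas, of the kind of check a privacy team can run before releasing a "de-identified" dataset: count how many records remain unique on a handful of quasi-identifiers, the same property the studies above exploit. The columns and data are hypothetical.

```python
import pandas as pd

# Hypothetical "de-identified" dataset: direct identifiers removed,
# but quasi-identifiers (3-digit ZIP, birth year, sex) remain.
df = pd.DataFrame({
    "zip3":       ["021", "021", "100", "606", "021"],
    "birth_year": [1980, 1980, 1975, 1990, 1962],
    "sex":        ["F", "M", "F", "M", "F"],
})

QUASI_IDENTIFIERS = ["zip3", "birth_year", "sex"]

def k_anonymity_report(data: pd.DataFrame, cols: list) -> None:
    """Report how many records share each quasi-identifier combination."""
    group_sizes = data.groupby(cols).size()
    k = group_sizes.min()                    # worst-case group size ("k" in k-anonymity)
    unique = int((group_sizes == 1).sum())   # combinations matching exactly one person
    print(f"k-anonymity of release: {k}")
    print(f"{unique} of {len(group_sizes)} combinations point to a single record")

k_anonymity_report(df, QUASI_IDENTIFIERS)
```

If k comes back as 1, at least one patient is uniquely identifiable from the quasi-identifiers alone, before any AI-assisted linkage is even attempted.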
AI Black Box Problems and Audit Challenges
AI algorithms often operate as “black boxes,” meaning their decision-making processes are hidden and difficult to interpret. This lack of transparency poses serious challenges when trying to audit data flows or check for bias. Without clear visibility into how these systems handle data, it becomes nearly impossible to ensure compliance or accountability.
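One partial remedy is to make every model invocation auditable even when the model itself is not interpretable. The sketch below is a hypothetical wrapper around a generic `model.predict` interface (none of these names come from a specific product): it records who ran the model, which version, and a hash of the inputs, producing the audit trail that opaque systems otherwise lack.

```python
import json
import hashlib
from datetime import datetime, timezone

AUDIT_LOG = []  # stand-in for an append-only audit store

def audited_predict(model, model_version: str, features: dict, user_id: str):
    """Run a prediction and emit an audit record alongside it."""
    prediction = model.predict(features)  # assumed generic model interface
    AUDIT_LOG.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "user_id": user_id,
        # Hash the inputs rather than logging raw PHI.
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "prediction": prediction,
    })
    return prediction

class _StubModel:  # stand-in model so the sketch runs end to end
    def predict(self, features):
        return "low-risk"

print(audited_predict(_StubModel(), "v1.2.0", {"age": 54, "a1c": 6.9}, "dr_smith"))
print(AUDIT_LOG[-1]["input_hash"][:12])
```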
The risks don’t stop there. As healthcare systems become more interconnected, these opaque algorithms are part of a broader web that increases exposure to cyber threats.
Growing Attack Surfaces in Connected Healthcare
The interconnected nature of modern healthcare systems significantly broadens the attack surface. Each connection point in AI-powered and cloud-based platforms creates a new opportunity for cybercriminals. In 2024, the Department of Health and Human Services reported over 700 healthcare data breaches in the U.S., affecting more than 180 million records [5].
Adding to the complexity, 93% of IT leaders plan to integrate AI agents into their systems within the next two years, further increasing potential vulnerabilities [5]. AI also empowers hackers to craft more convincing phishing emails and discover system weaknesses faster, leaving less time to deploy critical patches [7]. A striking example is the 2021 ransomware attack on Ireland’s Health Service Executive, which caused a nationwide shutdown of hospital IT systems [6].
These expanding risks make it clear that modern, adaptive protections are essential. As Linda Malek, Partner and Chair of the Healthcare and Privacy & Cybersecurity practice groups at Moses & Singer, put it:
"There is still a gap in terms of how security and privacy should be regulated in this area. There is a patchwork of laws that we apply, but they were not designed for AI. Neither is HIPAA." [4]
Where Current Compliance Frameworks Miss the Mark
HIPAA provides essential protections for Protected Health Information (PHI), but it falls short when it comes to addressing the unique risks posed by AI. As AI continues to evolve, the transparency and accountability it demands introduce challenges that traditional frameworks weren't designed to handle. These gaps create vulnerabilities in risk strategies for AI, as discussed below.
Privacy Focus vs. Algorithm Accountability
HIPAA’s "minimum necessary" rule is designed to limit the use of data, but AI thrives on large datasets to function effectively. This creates a conflict, as larger datasets increase the risk of re-identification.
A legal expert from Foley & Lardner LLP explained:
"AI is a powerful enabler in digital health, but it amplifies privacy challenges. By aligning AI practices with HIPAA, conducting vigilant oversight, and anticipating regulatory developments, Privacy Officers can safeguard sensitive information and promote compliance and innovation in the next era of digital health." [1]
This tension between privacy safeguards and AI's data needs highlights the challenge of applying static rules to rapidly changing technology.
Static Rules for Dynamic AI Systems
Traditional compliance frameworks rely on fixed processes and stable systems, but AI systems are anything but static. They learn, adapt, and can even experience performance drift, especially when used across diverse patient populations. These dynamic characteristics make it difficult for static frameworks to keep up, leading to strained resources and rising regulatory costs for compliance teams [8].
Some regulatory bodies, like the FDA, are beginning to explore adaptive approaches. For instance, predetermined change control plans allow for iterative improvements in AI without requiring new approvals for every modification [9]. Dave Rowe, Executive Vice President at Intellias, noted:
"Managing healthcare compliance is a continuous investment of time and talent, complicated further by ever-changing regulations, internal systems and technology. Keeping up with these two moving targets requires incredible focus and resources. However, when AI is integrated into the process, it enables real-time regulatory radar for team members. This allows teams to stay current with regulations and confidently adapt to the constantly evolving landscape." [8]
Missing Protections Against AI Bias and Safety Issues
Current frameworks also fail to address the ethical challenges AI presents, particularly around bias and safety. HIPAA doesn’t provide guidance on these issues, leaving gaps in ensuring equitable and transparent care. For example, AI-driven diagnostics have shown racial and gender biases, such as underdiagnosing liver disease in women or underestimating the needs of Black patients [10]. Without mandated audits for bias, fairness assessments, or requirements for diverse training datasets, these disparities risk becoming even more entrenched.
Mark Coeckelbergh, AI Ethics Global Lead, emphasized this concern:
"AI is a powerful tool that has the potential to revolutionize healthcare, but its use must be governed by ethical principles to ensure patient safety, fairness, and transparency." [10]
With nearly 75% of healthcare and life sciences organizations planning to use AI for compliance, data analysis, and risk assessments, there’s an urgent need to update risk management standards to meet these modern demands [8].
Practical Solutions for Modern Risk Management
Addressing compliance gaps in healthcare requires moving beyond traditional frameworks like HIPAA. To tackle the fast-paced challenges of modern technology, healthcare organizations need tools and strategies that focus on real-time monitoring and adaptive risk management. Specialized AI platforms are becoming essential to keep up with the ever-evolving landscape of technology threats.
Real-Time Risk Monitoring and Adaptive Controls
Traditional risk assessments, which often happen quarterly or annually, are no match for the speed at which AI systems evolve. Real-time risk monitoring offers a continuous and predictive approach to identifying potential compliance issues across healthcare systems [11]. This strategy includes advanced techniques like anomaly detection, predictive risk modeling, automated risk scoring, and tracking provider behavior - all integrated with systems like EHR and billing platforms [11].
Take, for instance, an AI diagnostics tool that improved data classification and decision-making accuracy. Not only did it safeguard patient information, but it also ensured AI-generated data was correctly categorized before being used. This led to a 5% shift in treatment decisions, enabling more precise diagnoses and better outcomes [12][13].
To implement effective real-time monitoring, healthcare organizations should:
- Clearly define compliance goals, such as ensuring billing accuracy, securing patient data, and maintaining documentation integrity.
- Select AI tools that integrate seamlessly with existing systems, offer robust analytics, and support customization.
- Set up appropriate alerts and regularly validate AI performance against real-world outcomes [11].
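As one concrete illustration of the anomaly-detection technique mentioned above, the sketch below uses scikit-learn's IsolationForest to flag unusual record-access sessions. The features and numbers are invented for illustration; a real deployment would draw them from EHR and billing logs.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-session features extracted from access logs:
# [records accessed, distinct patients viewed, after-hours minutes]
sessions = np.array([
    [12,  10,   0],
    [15,  12,   5],
    [11,   9,   0],
    [14,  11,   2],
    [480, 400, 310],   # bulk access in the middle of the night - worth a human look
])

detector = IsolationForest(contamination=0.2, random_state=0)
labels = detector.fit_predict(sessions)   # -1 = anomaly, 1 = normal

for session, label in zip(sessions, labels):
    if label == -1:
        print(f"flag for review: {session.tolist()}")
```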
In addition to monitoring, addressing risks from third-party vendors is equally critical in managing cybersecurity threats.
Better Vendor and Third-Party Risk Management
Managing third-party risks has become more efficient with automated workflows and collaborative networks. Healthcare organizations often work with numerous vendors, each posing potential security vulnerabilities.
One example is Censinet RiskOps, a cloud-based risk platform that allows healthcare providers and vendors to securely share cybersecurity and risk data [14][15]. This platform addresses risks tied to vendors, patient data, medical devices, and supply chain vulnerabilities. With a network that includes over 50,000 vendors and products, Censinet RiskOps facilitates the exchange of valuable insights on vendor security practices and emerging threats [14].
Using Censinet RiskOps™ and Censinet AI™ for AI Risk Management
Specialized platforms like Censinet RiskOps™ and Censinet AI™ streamline vendor assessments and improve risk management efficiency. Censinet RiskOps™ simplifies risk evaluations throughout a vendor's lifecycle with features like its Digital Risk Catalog™ and Delta-Based Reassessments, significantly reducing the time needed for vendor reviews [16].
Censinet AI™ takes this a step further by using AI to speed up processes like completing security questionnaires - turning tasks that once took weeks into seconds. The platform automatically summarizes vendor documentation, identifies key product integration details, and highlights fourth-party risk exposures. It also generates detailed risk reports while keeping a human-in-the-loop approach, ensuring oversight through configurable rules and review processes.
For AI governance, Censinet AI™ acts as a centralized control system. Its intuitive risk dashboard aggregates real-time data, serving as a hub for managing AI-related policies, risks, and tasks. By automatically routing assessment findings to the right stakeholders, including AI governance committees, the platform ensures timely action and accountability. With over 100 provider and payer facilities already participating in the Censinet Risk Network [16], healthcare organizations can tap into shared intelligence to better manage AI-related risks and streamline governance processes.
Creating New Standards for AI Risk Management
Healthcare organizations must look beyond HIPAA's rigid framework and establish standards that can keep up with the ever-changing nature of AI systems. Managing AI risks effectively requires a shift in approach, focusing on three key areas: adopting continuous monitoring, setting clear ethical guidelines, and staying in tune with evolving federal regulations.
Continuous Risk Management Over Periodic Checks
Traditional compliance methods, like quarterly or annual assessments, simply can't keep up with AI systems that evolve at a rapid pace. A continuous risk management approach helps organizations stay ahead by identifying and addressing risks throughout the AI lifecycle. Scott Madenburg compares this dynamic approach to a GPS: it offers real-time updates that help manage risks as conditions shift.
The FUTURE-AI Consortium, a group of 117 experts from 50 countries founded in 2021, has laid out best practices for implementing continuous AI risk management [17]. These include using AI logging to track user actions, data access, predictions, and clinical decisions in real time, creating an instant audit trail. Monitoring systems should also include periodic audits and updates to address issues like data drift or performance drops. Automated tools that provide uncertainty estimates can further guide users in making informed decisions [17]. With only 24% of generative AI projects currently secured and the average cost of a data breach projected to hit $4.88 million in 2024 [19], waiting months for risk assessments is simply not an option. Continuous monitoring enables organizations to respond quickly and lays the groundwork for ethical AI practices.
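The uncertainty-estimate recommendation is inexpensive to prototype for any classifier that exposes class probabilities. A minimal sketch follows; the 0.8 threshold is illustrative and would need clinical calibration, not a value taken from the FUTURE-AI guidance.

```python
import numpy as np

def needs_human_review(class_probs: np.ndarray, threshold: float = 0.8) -> bool:
    """Route a prediction to a clinician when the model is not confident.

    class_probs: the model's probability distribution over possible outputs.
    threshold:   minimum top-class probability to accept automatically
                 (illustrative value; requires clinical calibration).
    """
    return float(np.max(class_probs)) < threshold

# A confident prediction passes; an ambiguous one goes to a human.
print(needs_human_review(np.array([0.95, 0.03, 0.02])))  # False -> auto-accept
print(needs_human_review(np.array([0.45, 0.40, 0.15])))  # True  -> clinician review
```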
Clear AI Decision-Making and Ethical Guidelines
Transparency and explainability are essential for building trust in healthcare AI. To achieve this, organizations should embed principles like fairness, traceability, robustness, and explainability into every stage of AI development and deployment [17]. Klaus Moosmayer, Chief Ethics, Risk and Compliance Officer at Novartis International, emphasizes:
"Any approach to AI must be grounded in a set of core principles that guide the different stages of AI development and deployment, driving innovations to be cutting-edge and ethically sound."
Bias detection should start during the design phase, using reliable methods to identify and address potential issues. Many states are already requiring transparency in AI applications. For instance, healthcare providers using AI to generate patient communications must disclose its use and provide clear instructions for reaching a human provider [21]. Critical decisions should not rely solely on automated systems; human oversight is essential. Organizations should conduct impact assessments, allow individuals to appeal AI-driven decisions, and ensure that AI use is disclosed [22].
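A first-pass bias audit of the kind described above can be as simple as comparing a model's error rates across patient subgroups. Here is a minimal sketch using scikit-learn's recall_score on hypothetical evaluation data; any real audit would use validated cohorts and clinically meaningful disparity thresholds.

```python
import numpy as np
from sklearn.metrics import recall_score

# Hypothetical evaluation set: true diagnoses, model predictions, and a
# demographic attribute for each patient.
y_true = np.array([1, 1, 0, 1, 1, 0, 1, 1])
y_pred = np.array([1, 1, 0, 1, 0, 0, 0, 1])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

for g in np.unique(group):
    mask = group == g
    recall = recall_score(y_true[mask], y_pred[mask])
    print(f"group {g}: recall = {recall:.2f}")  # missed diagnoses show up as low recall
```

In this toy data, group B's recall falls far below group A's - exactly the pattern of missed diagnoses concentrated in one population that the studies cited above describe.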
Staying Current with Federal AI Guidance
In addition to continuous monitoring and ethical guidelines, healthcare organizations need to stay aligned with evolving federal AI standards. The regulatory environment is changing quickly. In 2024, 45 states proposed AI-related bills, and 31 states and territories enacted AI laws or resolutions [18]. Globally, the EU AI Act, adopted in mid-2024, established unified rules for AI across 27 EU member states, focusing on areas like risk management, data governance, human oversight, transparency, and cybersecurity [20]. Meanwhile, the U.K. is leaning toward sector-specific guidance, and China continues to expand its regulatory framework under tight government oversight [18].
Healthcare organizations can look to frameworks like Singapore's AI in Healthcare Guidelines for practical advice [20]. To stay compliant, they should introduce AI-specific policies, protocols, and training programs that educate staff on both the benefits and risks of AI. This includes submitting algorithms and datasets for certification to ensure they align with clinical standards and avoid discriminatory outcomes [22]. Dedicated teams should be tasked with monitoring regulatory changes and translating them into actionable practices, ensuring AI risk management remains effective and up to date.
Next Steps for Healthcare Organizations
HIPAA alone isn’t enough to shield healthcare organizations from the intricate risks posed by AI and other emerging technologies. With only 9% of companies feeling fully prepared to incorporate AI into their risk management strategies [24], it’s clear that action is urgently needed. Healthcare organizations must take deliberate steps to modernize their risk management approaches and build frameworks that can handle the unique challenges AI presents.
The first step is to move beyond basic compliance checklists and adopt a more advanced Enterprise Risk Management (ERM) framework tailored to AI-related risks [25]. A modernized ERM strategy allows organizations to identify potential threats early and make informed decisions to minimize them - or even turn them into opportunities [24]. According to McKinsey, AI could save the U.S. healthcare system up to $360 billion annually by improving efficiency and reducing adverse outcomes [25]. However, achieving these benefits depends on having a solid risk management foundation in place.
Next, it’s critical to evaluate your AI portfolio. Start by creating a detailed inventory of all AI systems currently in use across your organization. Regularly assess these systems for risks using tools like HITRUST's AI Risk Management Assessment and NIST's AI RMF [2]. This process can uncover vulnerabilities, which is especially important given that only 16% of health systems have a systemwide governance policy addressing AI usage and data access [25].
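The inventory itself does not require specialized tooling to get started. Below is a minimal sketch of one structured record per system, with a status field loosely keyed to the NIST AI RMF's four functions (Govern, Map, Measure, Manage); all field names, systems, and vendors here are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in an organization-wide AI inventory (illustrative fields)."""
    name: str
    vendor: str
    clinical_use: str
    handles_phi: bool
    last_risk_review: str                                 # ISO date of last assessment
    nist_rmf_status: dict = field(default_factory=dict)   # per-function notes

inventory = [
    AISystemRecord(
        name="sepsis-early-warning",                      # hypothetical system
        vendor="ExampleVendor Inc.",                      # hypothetical vendor
        clinical_use="inpatient deterioration alerts",
        handles_phi=True,
        last_risk_review="2024-11-01",
        nist_rmf_status={"Govern": "policy assigned", "Measure": "bias audit due"},
    ),
]

# Simple triage: PHI-handling systems with stale reviews rise to the top of the queue.
overdue = [s.name for s in inventory
           if s.handles_phi and s.last_risk_review < "2025-01-01"]
print(overdue)
```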
Training is another essential step. All stakeholders - from clinicians to administrative staff - need ongoing education on how to effectively and safely interact with AI tools. Training should also cover cybersecurity risks and mitigation strategies, drawing on resources from organizations like HITRUST [2]. This ensures that everyone understands both the advantages and limitations of AI.
Organizations must also enforce strict vendor criteria. This includes requiring audit trails, conducting performance testing, and maintaining up-to-date compliance oversight [25].
Additionally, having contingency plans for AI disruptions is non-negotiable. Develop protocols that address data access, alternative workflows, and communication strategies for when AI systems go offline or are compromised. These plans should include clear procedures for identifying, reporting, and resolving AI-related cybersecurity incidents, with collaboration across IT, compliance, legal, and clinical teams [2].
Integrating AI into cybersecurity strategies is another priority. Healthcare organizations that use AI in their cybersecurity programs report faster breach detection and containment - cutting times by 21–31% on average [23]. They’ve also seen lower data breach costs, with savings ranging from $800,000 to $1.77 million [23]. Leveraging AI to monitor system inventories, analyze vulnerability databases, and assess cybersecurity risks consistently can further strengthen defenses.
Finally, validate your security controls through external audits and use incident reporting software to track and analyze AI-related issues [2][25].
As the regulatory landscape for AI evolves - particularly with new guidelines from the HHS AI Task Force expected by 2025 - healthcare organizations must stay informed and adjust their risk management strategies accordingly. Updating these frameworks now not only bridges the gaps left by HIPAA but also positions organizations to fully harness AI’s potential while safeguarding patient safety and reinforcing cybersecurity in an increasingly complex environment.
FAQs
Why doesn’t HIPAA fully address the risks of AI in healthcare?
HIPAA struggles to keep up with the challenges posed by AI, largely because it was created with traditional healthcare practices in mind - not the intricate demands of modern technology. It doesn't effectively deal with AI-specific risks, such as those linked to machine learning algorithms or the ways AI handles and interprets sensitive patient information.
Another gap lies in HIPAA's limited guidance on data sharing in cloud-based environments and the distinct security concerns tied to AI-powered systems. As healthcare continues to adopt AI solutions, there's a growing need for updated frameworks that address these new risks while safeguarding both patient privacy and system security.
What are the biggest data privacy and cybersecurity risks AI creates for healthcare systems?
AI introduces distinct challenges to healthcare data privacy and cybersecurity, particularly when it comes to data breaches, unauthorized access, and system vulnerabilities. The root of these problems lies in the nature of AI systems, which often depend on processing vast amounts of sensitive patient information and utilizing complex algorithms. Without proper safeguards, these systems can become prime targets for exploitation.
For instance, if an AI model isn't secured adequately, it could inadvertently expose patient data during processing or open the door to cyberattacks. On top of that, AI systems often rely on continuous learning, which can introduce new vulnerabilities over time. This means they require constant monitoring and updates to ensure data remains secure.
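The continuous-learning risk described above is usually caught with drift monitoring: statistically comparing this month's model inputs against the data the model was validated on. A minimal sketch using SciPy's two-sample Kolmogorov-Smirnov test on synthetic data; the 0.05 cutoff is the conventional statistical default, not a healthcare-specific standard.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Synthetic example: a single input feature (say, systolic blood pressure)
# at validation time vs. in this month's production traffic.
baseline = rng.normal(loc=120, scale=15, size=1000)
current  = rng.normal(loc=135, scale=15, size=1000)   # shifted population

stat, p_value = ks_2samp(baseline, current)
if p_value < 0.05:
    print(f"input drift detected (KS statistic = {stat:.3f}); trigger re-validation")
```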
To tackle these challenges, healthcare organizations need to move beyond traditional compliance measures like HIPAA. They must adopt forward-thinking risk management strategies and implement strong cybersecurity defenses to keep patient data safe in this rapidly evolving environment.
What strategies can healthcare organizations use to manage AI-related risks effectively?
Healthcare organizations can tackle AI-related risks by implementing a well-rounded risk management framework that brings together legal, compliance, and IT teams. Key measures include conducting regular security audits, using strong data encryption methods, and enforcing strict access controls to protect sensitive information. Beyond meeting regulatory standards like HIPAA, it's important to address unique challenges posed by AI technologies.
Spotting vulnerabilities early is crucial. This can be done through simulated attack tests and continuous system monitoring. Furthermore, embracing ethical AI practices and maintaining transparency in how AI systems make decisions can strengthen trust and minimize risks. These proactive steps enable healthcare providers to better handle the complexities of AI in today’s rapidly changing tech environment.
Related posts
- Cross-Jurisdictional AI Governance: Creating Unified Approaches in a Fragmented Regulatory Landscape
- The AI-Augmented Risk Assessor: How Technology is Redefining Professional Roles in 2025
- AI or Die: Why Healthcare Must Manage AI Risk Before It’s Too Late
- What’s Lurking in Your Algorithms? AI Risk Assessment for Healthcare CIOs
Key Points:
Why is HIPAA insufficient for managing AI and emerging tech risks?
- Definition: HIPAA (Health Insurance Portability and Accountability Act) focuses on protecting patient data privacy and security but does not address the unique risks posed by AI and emerging technologies.
- Limitations: HIPAA lacks provisions for managing algorithmic bias, real-time data processing, and advanced cybersecurity threats, leaving healthcare organizations vulnerable to new risks.
- Impact: As AI adoption grows, relying solely on HIPAA creates compliance gaps and increases the likelihood of data breaches, biased care, and regulatory penalties.
What are the key risks of AI and emerging technologies in healthcare?
- Algorithmic Bias: AI models trained on unrepresentative datasets can perpetuate disparities in care, leading to unequal treatment for underrepresented populations.
- Data Privacy Breaches: Emerging technologies process vast amounts of sensitive patient data, increasing the risk of breaches.
- Cybersecurity Vulnerabilities: IoMT (Internet of Medical Things) devices and AI systems are prime targets for cyberattacks, including ransomware and data poisoning.
- Compliance Gaps: Current regulations like HIPAA do not fully address the complexities of AI and emerging tech, creating legal and ethical challenges.
Why do healthcare organizations need a new risk standard?
- AI-Specific Challenges: A new standard is required to address risks like algorithmic bias, ethical AI use, and real-time data processing.
- Evolving Threat Landscape: Cybersecurity threats are becoming more sophisticated, requiring advanced measures beyond HIPAA’s scope.
- Regulatory Alignment: A new standard ensures compliance with emerging regulations like the EU AI Act and updated FDA guidelines.
- Proactive Risk Management: A comprehensive standard enables organizations to anticipate and mitigate risks before they escalate.
How can healthcare organizations manage AI and emerging tech risks?
- Adopt AI-Specific Frameworks: Use frameworks like NIST’s AI Risk Management Framework to guide risk assessments and mitigation strategies.
- Conduct Bias Audits: Regularly evaluate AI models for bias and ensure diverse, representative training datasets.
- Implement Real-Time Monitoring: Use continuous monitoring tools to detect vulnerabilities and respond to threats immediately.
- Ensure Ethical AI Use: Establish governance committees to oversee AI projects and ensure alignment with ethical principles.
- Update Compliance Practices: Align risk management efforts with updated regulatory standards to avoid penalties and ensure patient trust.
What role does continuous monitoring play in managing emerging tech risks?
- Real-Time Detection: Continuous monitoring identifies risks as they arise, enabling faster responses to threats.
- Improved Visibility: Provides a comprehensive view of an organization’s risk landscape, reducing blind spots.
- Regulatory Compliance: Ensures that organizations remain compliant with evolving standards by tracking changes in real-time.
- Proactive Risk Management: Allows organizations to address risks before they escalate, minimizing disruptions to patient care.
What are the benefits of adopting a new risk standard for AI and emerging tech?
- Improved Patient Safety: Reduces the likelihood of incidents caused by biased AI models or cybersecurity breaches.
- Enhanced Trust: Demonstrates a commitment to ethical AI use and robust risk management, building confidence among patients and stakeholders.
- Reduced Compliance Risks: Aligns with future regulatory requirements, minimizing the risk of penalties.
- Operational Efficiency: Automates risk assessments and compliance tracking, freeing up resources for strategic initiatives.
- Future-Proofing: Prepares organizations for the rapid pace of technological advancements and regulatory changes.