X Close Search

How can we assist?

Demo Request

How to Manage AI Risk in Healthcare Security

Explore how healthcare leaders manage AI risks, enhance security, and balance innovation with patient safety.

Post Summary

Why is AI risk management critical in healthcare security?

AI introduces unique risks like data breaches, algorithmic bias, and operational disruptions, making robust risk management essential to protect patient safety and data.

What are the key risks of AI in healthcare?

Risks include cybersecurity vulnerabilities, data privacy concerns, algorithmic bias, and overreliance on AI systems.

How does AI improve healthcare cybersecurity?

AI enhances cybersecurity by automating threat detection, monitoring IoMT devices, and predicting vulnerabilities to enable proactive risk mitigation.

What frameworks support AI risk management in healthcare?

Frameworks like NIST Cybersecurity Framework 2.0 and the NIST AI Risk Management Framework provide structured guidance for managing AI risks.

What role does governance play in AI risk management?

Governance ensures ethical AI use, regulatory compliance, and accountability, fostering trust and transparency in healthcare systems.

How can healthcare organizations implement AI responsibly?

Organizations can adopt ethical AI practices, train staff, ensure transparency, and integrate AI into workflows with robust risk management strategies.

The rapid acceleration of artificial intelligence (AI) adoption in healthcare offers transformative opportunities, from improving diagnostics to streamlining administrative processes. Yet, its unmatched potential comes with significant risks, especially in the areas of security, governance, and operational integrity. With the stakes as high as patient safety, healthcare leaders must balance innovation with robust risk management.

This article captures a compelling discussion between Willie Leer, Chief Marketing Officer of Point Guard AI, and Manmahan Singh, Deputy Chief Information Security Officer (CISO) at UT Southwestern Medical Center, during a podcast recorded at the Black Hat security conference. These industry experts explore the intersection of AI, cybersecurity, and healthcare, offering actionable insights for decision-makers navigating this complex landscape.

The Dual Promise and Perils of AI in Healthcare

AI’s applications in healthcare are as varied as they are impactful. From enhancing patient diagnostics to streamlining hospital admissions and revenue management, the possibilities appear boundless. However, as Willie Leer aptly puts it, "AI has descended on us so quickly, it’s unprecedented", creating a governance gap that organizations must urgently address.

Manmahan Singh emphasizes that healthcare institutions must adopt a curated approach to AI. Instead of chasing trends, organizations should identify pain points - such as inefficiencies in operations - and evaluate how AI solutions can enhance patient safety and outcomes. Singh highlights, "It’s always a patient-first approach. Productivity is important, but ensuring patient safety and adding value to the care process is paramount."

Challenges AI Introduces

The rise of AI in healthcare brings with it several inherent risks:

  • Security vulnerabilities: Expanding attack surfaces and the potential for data poisoning or model manipulation.
  • Governance gaps: AI’s rapid adoption has outpaced the development of oversight frameworks.
  • Regulatory ambiguity: While existing laws like HIPAA provide a foundation, they are insufficient for AI-specific risks, particularly in handling model-generated decisions.
  • Operational risks: Shadow AI use, where researchers or employees implement unapproved AI models, increases the likelihood of security breaches.

Key Considerations for AI Risk Management

1. Value vs. Vulnerability Assessment

Manmahan Singh stresses that every AI initiative should begin with a thorough evaluation of value versus risk. This includes:

  • Identifying how the technology will enhance patient safety.
  • Assessing vulnerabilities tied to data usage and sharing.
  • Establishing clear criteria for acceptable operational risks.

The goal is to ensure AI adoption is purposeful and aligned with organizational and patient priorities.

2. Creating Robust Governance Frameworks

A recurring theme in the conversation is the need for formal governance structures. Singh highlights UT Southwestern’s AI risk oversight committee, which brings together stakeholders from healthcare leadership, research, compliance, and legal departments. This multidisciplinary approach enables collaborative decision-making that prioritizes security, compliance, and innovation.

The governance process should include:

  • Early-stage security and compliance reviews (shift-left security).
  • Detailed evaluations of AI models’ data handling, access permissions, and operational behavior.
  • Ongoing monitoring to detect and address issues such as model drift or unauthorized data access.

Willie Leer echoes this sentiment, emphasizing that good governance should not stifle innovation. Instead, it should provide guardrails, allowing organizations to experiment safely.

3. Tackling Shadow AI

Shadow AI - unauthorized AI implementations - poses significant risks, particularly in academic medical centers where researchers often experiment with open-source AI models. Singh explains that his institution has implemented an education-first approach, offering support and guidance to encourage secure practices without stifling academic progress.

Willie Leer adds that discovery tools can reveal the extent of AI activity within an organization, enabling leaders to take necessary control measures. "You can’t tell people not to use AI", Leer says, "but you can ensure it’s done securely and within proper governance frameworks."

4. Addressing Emerging Threats

AI brings new risk vectors to healthcare, including:

  • Data poisoning: A malicious actor manipulates training data, leading the AI model to produce incorrect results.
  • Autonomous agent vulnerabilities: AI agents with insufficient access controls may inadvertently expose sensitive data.
  • Prompt injections: Hackers craft queries to manipulate AI tools, potentially accessing sensitive patient information.

Both experts agree that organizations must develop AI-specific security practices, such as red-teaming AI models to identify vulnerabilities, implementing zero-trust principles, and continuously monitoring for anomalies.

Regulatory Factors in AI Risk Management

While existing frameworks like HIPAA provide a baseline for securing patient data, they fail to address the intricacies of AI-powered systems. Singh notes that state-level regulations, such as Texas House Bill 127, and federal executive orders are beginning to address AI usage and governance, but the regulatory landscape remains fragmented.

In the absence of comprehensive AI-focused regulations, organizations are turning to frameworks like the NIST AI Risk Management Framework (AI RMF) and the OWASP Top 10 LLM Risks to guide their AI security strategies.

Key Takeaways

  • Adopt a patient-first approach: Evaluate how AI projects will improve patient safety and outcomes before pursuing them.
  • Establish governance early: Create cross-functional committees to ensure AI adoption aligns with security and compliance requirements.
  • Combat shadow AI: Use discovery tools to identify unauthorized AI use and engage with users to improve adherence to security policies.
  • Secure AI models proactively: Conduct red-team testing, enforce zero-trust architecture, and monitor for data drift or anomalies.
  • Leverage regulatory frameworks: Use tools like NIST’s AI RMF to build a structured, compliant approach to AI risk management.
  • Prioritize education: Train employees on the risks of improper AI use and the importance of adhering to governance standards.
  • Avoid new security silos: Integrate AI risk management into existing security processes to avoid fragmentation.

Conclusion

The integration of AI into healthcare is inevitable and necessary to meet the growing demands of the industry. However, its adoption comes with complex risks that require a thoughtful and collaborative approach to manage effectively. From establishing governance frameworks to securing models against threats, healthcare organizations must prioritize patient safety and operational integrity while embracing the transformative potential of AI.

As Willie Leer aptly summarizes, "We can’t stifle innovation - we must create safe environments to experiment while maintaining governance." By following these insights, healthcare leaders can navigate the challenges of AI adoption, ensuring it serves its ultimate purpose: improving patient care.

Source: "Securing Healthcare in the Digital Age | CISO Podcast Series (Episode 3)" - The Cyber Express, YouTube, Aug 28, 2025 - https://www.youtube.com/watch?v=tZD9dNVnTQQ

Use: Embedded for reference. Brief quotes used for commentary/review.

Related Blog Posts

Key Points:

Why is AI risk management critical in healthcare security?

AI risk management is essential in healthcare because AI introduces unique challenges that traditional security measures cannot fully address. These include:

  • Data breaches: AI systems process vast amounts of sensitive patient data, increasing the risk of exposure.
  • Algorithmic bias: Poorly trained AI models can reinforce inequities, leading to unfair or harmful outcomes.
  • Operational disruptions: Overreliance on AI can lead to workflow interruptions if systems fail.
  • Patient safety risks: Errors in AI-driven diagnostics or treatment recommendations can endanger lives.

What are the key risks of AI in healthcare?

AI in healthcare presents several risks, including:

  • Cybersecurity vulnerabilities: Expanded attack surfaces due to interconnected systems and IoMT devices.
  • Data privacy concerns: Mismanagement of sensitive health information can lead to regulatory violations.
  • Bias in algorithms: Training data that lacks diversity can result in unfair treatment recommendations.
  • Overreliance on AI: Clinicians may defer too much judgment to AI systems, potentially overlooking critical nuances.

How does AI improve healthcare cybersecurity?

AI strengthens healthcare cybersecurity by:

  • Automating threat detection: AI monitors networks 24/7, identifying anomalies and potential breaches in real time.
  • Monitoring IoMT devices: AI tracks the behavior of connected medical devices to detect irregular activity.
  • Predicting vulnerabilities: AI uses predictive analytics to identify and address risks before they escalate.
  • Streamlining compliance: AI automates risk assessments and regulatory reporting, reducing administrative burdens.

What frameworks support AI risk management in healthcare?

Several frameworks guide AI risk management in healthcare, including:

  • NIST Cybersecurity Framework 2.0: Focuses on identifying, protecting, detecting, responding to, and recovering from cyber threats.
  • NIST AI Risk Management Framework: Addresses algorithmic bias, transparency, and accountability in AI systems.
  • HHS Cybersecurity Performance Goals: Emphasizes automation, continuous monitoring, and secure system design.

What role does governance play in AI risk management?

Governance is critical for ensuring responsible AI use in healthcare. It involves:

  • Ethical oversight: Establishing policies to prevent misuse and bias in AI systems.
  • Regulatory compliance: Aligning AI operations with HIPAA, GDPR, and other standards.
  • Accountability: Defining roles and responsibilities for AI oversight across teams.
  • Transparency: Ensuring that AI systems provide explainable outputs to build trust among clinicians and patients.

How can healthcare organizations implement AI responsibly?

To implement AI responsibly, healthcare organizations should:

  • Adopt ethical AI practices: Train algorithms on diverse datasets and validate outputs regularly.
  • Train staff: Educate clinicians and administrators on AI’s capabilities, limitations, and risks.
  • Ensure transparency: Use explainable AI (XAI) to clarify how systems make decisions.
  • Integrate AI into workflows: Pilot AI tools in small-scale projects to identify potential disruptions before full deployment.
  • Develop contingency plans: Prepare for AI system failures with backup protocols and alternative workflows.
Censinet Risk Assessment Request Graphic

Censinet RiskOps™ Demo Request

Do you want to revolutionize the way your healthcare organization manages third-party and enterprise risk while also saving time, money, and increasing data security? It’s time for RiskOps.

Schedule Demo

Sign-up for the Censinet Newsletter!

Hear from the Censinet team on industry news, events, content, and 
engage with our thought leaders every month.

Terms of Use | Privacy Policy | Security Statement | Crafted on the Narrow Land