From Promise to Peril: How Healthcare Leaders Can Govern AI Responsibly

Explore the balance between AI innovation and the critical need for responsible governance in healthcare to mitigate risks and enhance patient safety.

Post Summary

Artificial intelligence (AI) is transforming healthcare, offering faster diagnoses, reduced administrative burdens, and improved patient care. However, it also introduces risks like cybersecurity threats, data privacy concerns, and algorithmic bias. While AI tools have shown promise - like identifying missed epilepsy lesions or reducing administrative tasks by 69.5% - trust remains low, with only 48% of patients believing AI will improve outcomes. Cyberattacks, such as ransomware, cost healthcare organizations millions, and weak governance leaves systems vulnerable.

To address these challenges, healthcare leaders must prioritize AI governance by:

  • Establishing oversight committees.
  • Monitoring AI usage and risks.
  • Ensuring compliance with regulations like HIPAA.
  • Training staff and securing systems.

Strong governance frameworks not only mitigate risks but also ensure AI is used responsibly to benefit patients and providers alike.

Video: AI, Cybersecurity, and the High-Stakes Risks in Healthcare | A HIMSS 2025 Conversation with Lee Kim

AI Cybersecurity Risks in Healthcare

Healthcare organizations are navigating a storm of cybersecurity challenges as they embrace AI. With high-value patient data, outdated systems, and limited security resources, the sector faces unique vulnerabilities. To reap the benefits of AI without jeopardizing safety, healthcare leaders need a clear understanding of these risks.

Cybersecurity Threats from AI Systems

AI can be a double-edged sword in cybersecurity - it helps defend against threats but also enables more sophisticated attacks. For example, AI-powered malware uses machine learning to evade older detection systems. This is particularly concerning for healthcare networks, which often rely on outdated security infrastructure [2].

The numbers paint a grim picture. Healthcare organizations experience an average of 1,463 cyberattacks per week, and ransomware incidents surged by 278% between 2018 and 2023 [3]. When these attacks succeed, the costs are staggering. On average, healthcare organizations lose $900,000 per day due to operational disruptions caused by ransomware [3].

AI is also making cyberattacks more deceptive. Criminals now use AI to craft realistic phishing emails and deepfakes, targeting healthcare staff. These attacks exploit human vulnerabilities, which remain one of the weakest links in cybersecurity. According to Microsoft, "Email remains one of the largest vectors for delivering malware and phishing attacks for ransomware attacks, especially within the Healthcare industry" [3].

Moreover, the rise of Ransomware as a Service (RaaS) has lowered the technical barriers for cybercriminals. Combined with a 1,300% increase in business email compromise scams since 2015, this trend shows how AI is democratizing cybercrime [3].

Healthcare's interconnected systems, such as IoT devices, add another layer of risk. Many IoT devices lack proper management and protection, making them easy targets. Once compromised, attackers can move through the network to access critical AI systems and sensitive patient data [4].

Recent incidents highlight the real-world impact of these threats. In February 2024, Change Healthcare suffered a ransomware attack that affected 190 million individuals. Despite paying a $22 million ransom, the organization couldn't guarantee the return of stolen data [3]. Similarly, Ascension Healthcare faced a ransomware attack in May 2024 that disrupted operations across 140 hospitals and exposed data from 5.6 million patients [3].

These evolving threats not only compromise data but also pose risks to patient safety.

How AI Risks Affect Patient Safety

AI vulnerabilities in healthcare don't just threaten data - they can directly harm patients. When AI systems are compromised, the consequences often disrupt care and clinical decision-making.

The financial toll of breaches is massive. Healthcare data breaches now cost an average of $10.93 million, with phishing-related incidents costing $9.77 million per breach in 2024 [4][5]. In that same year, 14 data breaches affected over 1 million healthcare records each, impacting 237,986,282 U.S. residents [5]. Hacking and IT incidents accounted for 74% of breaches and for 90% of the individuals affected [8].

The WannaCry ransomware attack in 2017 serves as a stark reminder of how cybersecurity failures can disrupt patient care. The attack crippled the UK's National Health Service, forcing ambulances to be diverted and surgeries to be canceled. This incident underscored how quickly ransomware can jeopardize patient safety.

When AI systems used for clinical decisions are compromised, the stakes become even higher. Corrupted algorithms could lead to misdiagnoses, incorrect treatments, or delays in care. For instance, the OneBlood ransomware attack in July 2024 temporarily shut down internal systems at a blood center that supplies hundreds of hospitals in the southeastern U.S., directly affecting the region's blood supply [3].

AI Governance Gaps in Healthcare

As AI adoption accelerates, the lack of adequate oversight amplifies the risks. Bridging these governance gaps is critical to ensuring AI can be used safely and securely. Unfortunately, many healthcare organizations are falling short.

Many organizations lack approval processes for AI technologies. Nearly half (47%) of healthcare organizations have no approval process for AI at all, and 42% lack approval processes specific to AI technologies [10]. Even more concerning, only 1% have taken steps to develop AI policies or implement safeguards [10].

This governance shortfall is evident in other areas as well. While 81% of healthcare organizations use AI, only 31% actively monitor its usage [11]. Without proper monitoring, organizations leave themselves vulnerable to misuse and attacks. Additionally, less than half have established acceptable use policies for AI [11].

Attempts to implement governance frameworks often fall short. For example, healthcare organizations score poorly on the NIST AI Risk Management Framework, with an average coverage of just 31%. Measurement and management functions are particularly weak, scoring 24% and 28%, respectively [13].

"Technology alone is not enough; robust governance is essential. Without clear policies and oversight, AI introduces new risks such as insider threats, third-party vulnerabilities, and data breaches."

  • Lee Kim, HIMSS Senior Principal of Cybersecurity and Privacy [12]

Kim also emphasizes that "Money supports security, but without governance, AI-related risks remain unchecked" [10].

The urgency of addressing these gaps is clear. A 2023 survey found that only 16% of hospitals have system-wide AI governance policies [7]. Meanwhile, the market for AI in healthcare is valued at $10.4 billion and is projected to grow at a 38.4% annual rate through 2030 [6].

Without proper governance, the risks cascade. New AI tools may be deployed without assessing their security implications. Monitoring gaps make it harder to detect misuse or breaches. And without clear policies, staff may inadvertently create vulnerabilities through improper AI use.

The American Medical Association has stressed that cybersecurity is not just a technical issue. It requires collaboration among health IT professionals, healthcare systems, and government stakeholders to protect patient information [9].

Healthcare leaders must see AI governance as more than compliance. It's about building a secure foundation that protects patients and organizations from the ever-growing array of cyber threats.

AI Governance Strategies for Healthcare Leaders

With 65% of global businesses now using AI but only 25% implementing strong governance, healthcare leaders face both risks and opportunities. Addressing these gaps could unlock $360 billion in potential annual savings [19]. To govern AI responsibly, healthcare leaders need actionable strategies that tackle risks while promoting safe and effective innovation.

Building AI Governance Structures and Policies

Creating effective AI governance starts with establishing structures that balance innovation with patient safety, regulatory compliance, and accountability.

An essential first step is forming an AI Governance Committee. This committee should bring together a diverse group of stakeholders, including healthcare providers, AI specialists, ethicists, legal experts, patient advocates, and data scientists [14]. Such diversity ensures that decisions reflect clinical, ethical, and legal perspectives. As Stephen Kaufman, Chief Architect at Microsoft, aptly stated:

"AI governance is critical and should never be just a regulatory requirement" [18]

The committee’s role extends beyond initial oversight. It must continuously evaluate AI systems to ensure they meet safety, efficacy, and ethical standards. For example, the sepsis predictor tool from Epic failed to identify 67% of cases at the University of Michigan, highlighting the need for ongoing review [19].

Healthcare organizations also need robust AI policies. These should address key concerns like data privacy, algorithmic bias, transparency, and unintended consequences that could jeopardize patient safety [1]. Vendor selection is another critical area. Leaders should demand transparency, audit trails, and performance testing from AI vendors to ensure accountability [19].

Additionally, staff training is indispensable. Teams must be educated on safely using AI tools, recognizing their limitations, and reporting anomalies. Training programs should be tailored to specific roles and risk levels. Since human error remains a significant cybersecurity vulnerability, well-designed training is a vital line of defense.

Once governance structures are in place, the next step is to thoroughly assess AI-related risks.

AI Risk Assessment Models

AI’s evolving nature demands risk assessment models that go beyond traditional methods. Unlike static software, AI systems adapt and learn over time, introducing unique risks.

Risk analysis should focus on AI’s dynamic data flows, evolving training processes, and multiple access points [21]. To address these challenges, the NIST Artificial Intelligence Risk Management Framework (AI RMF) provides a structured approach built on four core functions: Govern, Map, Measure, and Manage [17]. Released in January 2023, this framework prioritizes trust, security, and privacy throughout an AI system’s lifecycle [24].

Other tools, like FMECA (Failure Mode, Effects, and Criticality Analysis), can help healthcare organizations predict and mitigate potential system failures [20]. Enterprise Risk Management (ERM) techniques, such as risk mapping and prioritizing risks by likelihood and impact, ensure that organizations stay ahead of potential issues [20].
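
To make the likelihood-and-impact approach concrete, below is a minimal sketch of an ERM-style risk register in Python. The risk entries, 1-5 scales, and scoring rule are illustrative assumptions, not values from any framework or vendor:

```python
# Minimal ERM-style risk register: score each AI-related risk by
# likelihood and impact, then rank so the highest-exposure items are
# reviewed first. Entries and 1-5 scales are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (negligible) to 5 (severe patient-safety impact)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

# Hypothetical register entries for illustration only.
register = [
    Risk("Training-data poisoning of diagnostic model", likelihood=2, impact=5),
    Risk("Phishing credential theft on AI admin console", likelihood=4, impact=4),
    Risk("Model drift degrading sepsis alerts", likelihood=3, impact=5),
]

for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.name}")
```

Ranking by the combined score mirrors the risk-mapping step described above; a production register would also track owners, mitigations, and review dates.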

Healthcare leaders must also consider how AI systems interact with electronic protected health information (ePHI). This includes evaluating who accesses the data and how outputs are used [23]. Implementing incident reporting systems is another key step. By logging and analyzing AI-related issues, organizations can improve system updates and processes [19].
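
To show what such a reporting system might capture, here is a hedged sketch of an AI incident log; the tools, fields, and categories are hypothetical:

```python
# Hedged sketch of an AI incident log: record AI-related issues with
# enough structure to spot recurring failure modes and feed them back
# into system updates. Fields and categories are hypothetical.
from collections import Counter

incidents = [
    {"tool": "sepsis-predictor", "category": "false-negative", "harm": True},
    {"tool": "sepsis-predictor", "category": "false-negative", "harm": False},
    {"tool": "triage-chatbot",   "category": "phi-exposure",   "harm": False},
]

# Recurring categories signal where review and retraining should focus.
for category, count in Counter(i["category"] for i in incidents).most_common():
    print(f"{category}: {count} incident(s)")
```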

Meeting Regulations and Standards

Thorough risk assessment must align with strict regulatory compliance to safeguard AI systems. Staying ahead of regulatory requirements is essential for healthcare leaders.

HIPAA compliance remains a cornerstone for AI in healthcare. Organizations should regularly assess how AI modifications impact the confidentiality, integrity, and availability of ePHI. Maintaining a detailed inventory of technology assets that handle ePHI is also critical [23]. Enhanced vendor oversight is equally important, with regular audits and AI-specific clauses in Business Associate Agreements (BAAs) [21].

To navigate the regulatory landscape, appointing a compliance leader is a smart move. This individual should monitor updates from the Office for Civil Rights (OCR), Federal Trade Commission actions, and state privacy laws [19][21]. Regulatory priorities vary by region. For instance, the EU AI Act emphasizes minimizing risks to fundamental rights, while U.S. guidelines focus on fostering innovation and competition [22].

Transparency and explainability are becoming increasingly vital. AI systems must avoid "black box" scenarios to build trust with patients, clinicians, and regulators [15]. Detailed documentation and audit trails of data handling and AI logic further support compliance and help organizations learn from their AI implementations [21].

As the Congressional Research Service notes:

"As a foundational issue, trust is required for the effective application of AI technologies. In the clinical health care context, this may involve how patients perceive AI technologies" [16]

Tools and Solutions for AI Risk Management

Advancements in technology have introduced tools that simplify and automate the process of managing AI risks, particularly in healthcare. These platforms transform traditionally complex workflows into more efficient systems, helping organizations protect sensitive patient data while maintaining operational stability.

Censinet RiskOps™ for AI Governance

Censinet RiskOps™ is a platform specifically designed to address the challenges of AI governance in healthcare. Its Digital Risk Catalog™ includes a database of over 40,000 pre-assessed vendors and products [26]. Unlike generic risk models, RiskOps™ allows organizations to customize risk assessments based on their unique patient demographics, regulatory requirements, and operational needs [27]. It also integrates with the NIST AI Risk Management Framework (AI RMF), offering a structured approach to overseeing AI systems throughout their lifecycle. Ed Gaudet, CEO and Founder of Censinet, highlights the platform's value:

"Our new release of support for NIST AI RMF gives Censinet customers an easy approach to identifying and governing risks associated with the adoption of AI technology, applications, & third-party tools." [28]

To fully leverage RiskOps™, organizations are encouraged to define clear roles and responsibilities for AI risk management and conduct thorough impact assessments [25].

Censinet AI™: Automated AI Risk Assessment

Censinet AI™ takes AI risk assessment to the next level by automating the evaluation process for vendors and solutions. This tool dramatically reduces the time required for third-party risk assessments. Vendors can complete security questionnaires in seconds instead of weeks, while the platform automatically summarizes evidence, captures key integration details, identifies fourth-party risks, and generates risk summary reports. Despite the automation, human oversight remains integral through customizable rules [29].

This level of automation is essential in today’s environment, where cyber threats are escalating. Ed Gaudet emphasizes the urgency:

"With ransomware growing more pervasive every day, and AI adoption outpacing our ability to manage it, healthcare organizations need faster and more effective solutions than ever before to protect care delivery from disruption." [29]

Censinet AI™ also improves collaboration among Governance, Risk, and Compliance teams by ensuring that crucial findings are directed to the appropriate stakeholders [29].

Centralized Dashboards and Real-Time Data

Real-time dashboards provide healthcare teams with predictive analytics and actionable insights, shifting management strategies from reactive to proactive. These dashboards offer visual tools to track trends and make adjustments as situations evolve [30]. For example, a healthcare provider in Washington state reduced lost cases by 20% within six months and cut labor costs by 74% over a year by integrating AI and machine learning into their dashboards [30]. This kind of performance monitoring is especially critical as nearly 75% of hospitals faced readmission penalties in 2023 [30].
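
As a rough illustration of the kind of rule such a dashboard might evaluate, the sketch below compares a model's rolling performance against its baseline and raises an alert on drift. The metric, tolerance, and readings are assumptions, not figures from any specific platform:

```python
# Illustrative dashboard rule: compare a model's rolling performance
# against its validation baseline and flag drift for human review.
# The metric, tolerance, and readings below are assumptions.
from statistics import mean

def check_drift(recent_scores: list[float], baseline: float,
                tolerance: float = 0.05) -> str:
    """Alert when the rolling average falls below baseline - tolerance."""
    rolling = mean(recent_scores)
    if rolling < baseline - tolerance:
        return f"ALERT: rolling score {rolling:.3f} below baseline {baseline:.3f}"
    return f"OK: rolling score {rolling:.3f}"

# Hypothetical weekly AUROC readings for a readmission-risk model.
print(check_drift([0.81, 0.79, 0.74, 0.72], baseline=0.82))
```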

In addition to dashboards, scorecards offer a broader, long-term perspective that aids in strategic planning. Experts recommend starting with high-risk or high-value AI applications and selecting metrics that align with organizational goals and risk tolerance [31]. This phased approach to real-time monitoring strengthens governance efforts, helping to protect patient data and build trust within healthcare systems.

Case Studies: AI Governance Successes and Failures

When it comes to AI in healthcare, real-world examples highlight two stark realities: the devastating effects of poor governance and the immense advantages of thoughtful oversight. These cases offer valuable lessons for healthcare leaders as they navigate the complexities of AI adoption.

AI Failures: Lessons from Healthcare Incidents

Failures in healthcare AI often stem from weak governance, biased datasets, and inadequate validation processes. These shortcomings highlight the critical need for rigorous oversight.

Take Epic's Sepsis Predictor, for example. This system failed to identify most sepsis cases and overwhelmed clinical workflows with false alerts, causing more harm than good.

Another cautionary tale is IBM Watson for Oncology. After a collaboration that ran from 2013 to 2017, MD Anderson Cancer Center ended its work with Watson because the system failed to meet the needs of both oncologists and patients [32].

A study published in Communications Medicine in March 2025 revealed troubling gaps in AI's predictive capabilities. Machine learning models designed to assess hospital mortality risks missed 66% of cases where patients were at risk of death due to injuries or complications [35].

Automation bias also poses a serious threat. Under time pressure, pathologists were found to be 7% more likely to accept incorrect AI recommendations over accurate human diagnoses [35].

Then there’s algorithmic bias, a persistent issue in healthcare AI. Some tools have been shown to direct additional care toward healthier white patients while overlooking sicker Black patients. One study revealed that only 18% of Black patients were flagged for care, compared to an expected 47% [35][19].
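
A simple flag-rate audit is one way to surface exactly this kind of disparity. The sketch below compares how often a model flags patients for care across groups, using synthetic records shaped like the 18% vs. 47% gap cited above:

```python
# Flag-rate audit sketch: compare how often a model flags patients for
# care across demographic groups. The records are synthetic; real audits
# also control for clinical need before drawing conclusions.
from collections import defaultdict

def flag_rates(records: list[dict]) -> dict[str, float]:
    flagged, total = defaultdict(int), defaultdict(int)
    for r in records:
        total[r["group"]] += 1
        flagged[r["group"]] += r["flagged"]
    return {g: flagged[g] / total[g] for g in total}

# Synthetic cohort: group label and whether the model flagged the patient.
records = (
    [{"group": "A", "flagged": 1}] * 47 + [{"group": "A", "flagged": 0}] * 53 +
    [{"group": "B", "flagged": 1}] * 18 + [{"group": "B", "flagged": 0}] * 82
)
for group, rate in flag_rates(records).items():
    print(f"group {group}: flagged {rate:.0%}")
```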

These examples make one thing clear: responsible AI governance is non-negotiable. Without it, AI systems can - and do - fail, sometimes with life-threatening consequences. Healthcare leaders must understand that AI is not flawless and requires constant oversight to ensure patient safety [35].

These failures also serve as a stark reminder of how robust governance can turn potential disasters into success stories.

Effective AI Governance Examples

Despite the challenges, some organizations have demonstrated how proactive governance can balance innovation with safety and compliance. Their successes provide a roadmap for others.

The Mayo Clinic stands out as a leader in AI governance. It has implemented a detailed framework that prioritizes transparency, accountability, and continuous evaluation [33]. This approach includes clearly defined roles, rigorous validation processes, and ongoing monitoring to ensure AI tools perform as intended.

Another effective strategy involves forming cross-functional committees and employing zero-trust access measures to secure and validate AI applications [33][34]. For instance, AI-integrated decision support tools in electronic health records (EHRs) have shown promise. These tools assist clinicians by pulling in external patient data, identifying risk levels, and predicting outcomes - all while maintaining a layer of human oversight [34].

AI-powered cybersecurity is another area where governance has made a difference. By combining advanced threat detection, automation, and predictive analytics with strict controls, organizations have enhanced their security measures without jeopardizing patient data or disrupting clinical workflows [34].

What sets successful implementations apart is an emphasis on governance. These organizations focus on areas where AI can deliver immediate, measurable results, opt for proven technologies, and involve stakeholders early in the process to ensure smooth adoption [36]. They also regularly assess AI performance, making adjustments as needed to maximize benefits and minimize risks [36].

Julie Sweet, chair and CEO of Accenture, captures this sentiment perfectly:

"… unlocking the benefits of AI will only be possible if leaders seize the opportunity to inject and develop trust in its performance and outcomes in a systematic manner so businesses and people can unlock AI's incredible possibilities." [35]

Ultimately, healthcare leaders must shift their perspective from "Can AI solve this?" to "Should it?" [35]. This mindset, paired with strong governance frameworks, allows organizations to harness AI's potential while safeguarding patient safety and maintaining trust in healthcare systems.

Conclusion: Building Responsible AI in Healthcare

Implementing responsible AI in healthcare demands strong leadership and a steadfast focus on patient safety. The difference between success and failure lies in effective governance.

In 2024, the AI healthcare market expanded by 42% [37]. Yet, only 16% of organizations had system-wide AI governance frameworks in place, leaving significant gaps in oversight and highlighting both risks and opportunities for those ready to act decisively.

The Congressional Research Service emphasizes the importance of trust in AI, stating:

"As a foundational issue, trust is required for the effective application of AI technologies. In the clinical health care context, this may involve how patients perceive AI technologies." [16]

Trust is not automatic - it’s built through consistent, responsible practices that prioritize safety and transparency. These principles pave the way for actionable strategies.

Next Steps for Healthcare Leaders

To address the challenges and opportunities of AI in healthcare, leaders should focus on practical, measurable steps to strengthen governance:

  • Create clear leadership roles. Designate an AI governance leader and establish an oversight board with defined responsibilities to ensure accountability.
  • Inventory AI systems. Maintain a catalog of all AI tools, with special attention to high-risk or critical applications.
  • Improve data governance. Ensure high-quality, bias-free data fuels your AI models, and implement protocols to detect and address biases in training datasets.
  • Implement real-time monitoring and auditing tools. Track AI performance continuously so drift, misuse, and anomalies are caught early.
  • Secure AI systems. Protect against threats like data poisoning and prompt injections. Maintain a lifecycle registry for all AI applications.
  • Conduct regular impact assessments. Evaluate third-party risks and ensure compliance with regulations.

These steps provide a strong foundation for responsible AI governance in healthcare [16].

Balancing Innovation with Responsibility

While governance ensures safety, innovation must remain a priority. As Nigam Shah, Professor of Medicine at Stanford University, advises:

"Whatever you do within an organization, make sure it is fair, usable, reliable - and trust will follow." [38]

Trust also relies on ethical governance, accountable leadership, and designs that put people first [39]. Ethical governance involves equitable access to diverse datasets, safeguarding patient privacy, and addressing algorithmic bias. Accountable leadership ensures that AI projects align with organizational goals and clinical workflows, always reflecting broader societal values.

Crucially, AI should complement - not replace - the expertise of healthcare professionals. Establish clear usage protocols to avoid over-reliance on AI, and adopt phased approaches to test, refine, and scale systems. For instance, creating organizational AI registries can help monitor performance and ensure tools are used as intended [39].
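
As a minimal illustration of what such a registry might track, here is a sketch with hypothetical fields - owner, intended use, risk tier, and last validation date - chosen so stale, high-risk tools are easy to surface:

```python
# Hypothetical AI registry entry: track owner, intended use, risk tier,
# and last validation date so stale, high-risk tools are easy to surface.
from dataclasses import dataclass
from datetime import date

@dataclass
class AIRegistryEntry:
    tool: str
    owner: str
    intended_use: str
    risk_tier: str          # e.g., "high" for clinical decision support
    last_validated: date

registry = [
    AIRegistryEntry("sepsis-predictor", "Clinical Informatics",
                    "early sepsis alerting", "high", date(2024, 11, 1)),
    AIRegistryEntry("scheduling-assistant", "Operations",
                    "appointment triage", "low", date(2025, 3, 15)),
]

# Surface high-risk tools not validated within the last 180 days.
today = date(2025, 6, 1)
for entry in registry:
    if entry.risk_tier == "high" and (today - entry.last_validated).days > 180:
        print(f"Review overdue: {entry.tool} (owner: {entry.owner})")
```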

Continuous improvement is key. As AI evolves, governance frameworks must adapt. Regular ethical reviews with input from diverse disciplines can uncover blind spots and guide thoughtful, value-driven design [1].

With AI expected to automate 30%-40% of healthcare tasks [39] and contribute up to $15.7 trillion to the global economy by 2030 [40], taking proactive steps today is non-negotiable. The choices made now will shape the future of healthcare - for patients, providers, and the industry as a whole.

FAQs

What strategies can healthcare leaders use to govern AI responsibly and reduce cybersecurity risks?

Healthcare leaders can take meaningful steps to manage AI responsibly and strengthen cybersecurity by establishing well-defined policies and frameworks that guide AI usage. Ensuring compliance with data privacy laws and conducting regular system audits to uncover and address biases or vulnerabilities are also key actions.

Using AI risk assessment models and real-time monitoring tools can help anticipate potential threats and improve security measures. Incorporating incident reporting systems further ensures swift responses to issues, promoting accountability and prioritizing patient safety. These measures not only safeguard sensitive data but also help build confidence in AI-powered healthcare solutions.

What is algorithmic bias in AI healthcare systems, and how can it affect patient care?

Algorithmic Bias in AI Healthcare Systems

Algorithmic bias in AI healthcare systems happens when these models generate unfair or inaccurate outcomes due to biased training data or flawed design. The consequences can be severe - ranging from misdiagnoses to unequal treatment - and often worsen existing health disparities, disproportionately affecting minority and underserved communities.

To tackle these challenges, healthcare leaders can take several meaningful steps:

  • Expand training datasets to include diverse patient demographics, ensuring broader representation.
  • Incorporate fairness-focused algorithms that actively work to minimize bias in decision-making processes (a minimal sketch follows this list).
  • Bring together multidisciplinary teams - including clinicians, data scientists, and ethicists - to guide the development and deployment of AI tools.
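
One simple example of a fairness-focused adjustment is reweighting training samples by inverse group frequency so under-represented groups carry equal total weight. This sketch illustrates the idea only; it is not a complete bias-mitigation strategy:

```python
# One fairness-focused adjustment: reweight training samples by inverse
# group frequency so under-represented groups carry equal total weight.
# A sketch of the idea only, not a complete bias-mitigation strategy.
from collections import Counter

def inverse_frequency_weights(groups: list[str]) -> list[float]:
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    # Each group's samples get weight n / (k * count), so every group
    # contributes the same total weight during training.
    return [n / (k * counts[g]) for g in groups]

groups = ["A"] * 90 + ["B"] * 10   # synthetic, heavily imbalanced cohort
weights = inverse_frequency_weights(groups)
print(f"weight per A sample: {weights[0]:.2f}, per B sample: {weights[-1]:.2f}")
```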

Taking these actions can help healthcare organizations boost the accuracy of AI systems, encourage equitable treatment, and deliver better care for all patients.

Why is building trust in AI essential for healthcare, and how can organizations ensure patients and staff feel confident using it?

Trust in AI plays a crucial role in healthcare, influencing how patients engage with their care, adhere to treatments, and experience outcomes. When both patients and healthcare staff have confidence in AI, they are more inclined to use it, which can lead to smarter decisions and better overall results.

To nurture this trust, healthcare organizations should emphasize transparency - clearly communicating how AI works and the advantages it offers. A proven history of safety and reliable performance is just as essential. On top of that, safeguarding data security, adhering to regulations, and addressing concerns raised by patients and staff can strengthen trust in these technologies. By focusing on these areas, healthcare providers can create an environment where AI supports care without eroding confidence.
