
Shadow AI: Finding and Securing Unauthorized Models in Your Organization

Post Summary

Shadow AI refers to employees using AI tools without IT approval or security oversight, creating serious risks for healthcare organizations. These tools, often used for tasks like drafting clinical notes or managing communications, bypass essential safeguards, exposing sensitive patient data to potential breaches and compliance violations.

Key takeaways:

  • Data risks: Unauthorized AI tools process and retain patient data, increasing exposure to breaches and HIPAA violations.
  • Cost impact: Breaches linked to Shadow AI cost an average of $670,000 more than other incidents.
  • Detection challenges: IT teams detect fewer than 20% of these tools, leaving organizations vulnerable.
  • Operational risks: Unvetted AI outputs can lead to errors in clinical decisions and disrupt workflows.

To address these risks:

  1. Implement network monitoring and endpoint scanning to track AI usage.
  2. Use Data Loss Prevention (DLP) tools to block sensitive data transfers to unapproved AI platforms.
  3. Educate staff on approved tools and AI policies through targeted training.
  4. Establish AI governance councils to oversee adoption and ensure compliance.
  5. Leverage platforms like Censinet RiskOps™ for real-time risk management and vendor evaluations.

The bottom line: Shadow AI is a growing challenge in healthcare, but proactive detection, clear policies, and approved alternatives can mitigate risks while maintaining efficiency.


Key Risks of Shadow AI in Healthcare Organizations

Shadow AI poses three main risks to healthcare organizations: cybersecurity vulnerabilities, regulatory compliance failures, and threats to patient safety. These risks are deeply interconnected, and addressing them is essential to safeguard data, maintain compliance, and protect patients.

Cybersecurity and Data Breach Risks

When healthcare staff use unauthorized AI tools to handle patient data, they inadvertently expose sensitive information to systems that actively process, store, and potentially learn from it. As Eric Vanderburg, a cybersecurity executive, points out:

Shadow AI systems actively process, retain, and potentially learn from submitted data rather than simply storing or transmitting it [2].

This creates a unique challenge. Once Protected Health Information (PHI) is absorbed into an AI model's training set, it becomes almost impossible to remove. As SentinelOne explains:

You cannot request deletion from a neural network the way you can delete a file from a server [1].

The financial toll of these breaches is steep. Shadow AI-related data breaches cost an average of $670,000 more than other types of security incidents [3]. Alarmingly, 97% of organizations affected by these breaches lacked proper AI access controls at the time of the incident [1].

Traditional security tools like Data Loss Prevention (DLP) and Cloud Access Security Broker (CASB) systems often fail to detect these risks because Shadow AI operates through encrypted conversational streams over HTTPS. This lack of oversight leaves clinical data vulnerable to exploitation by cybercriminals, who can use it to create targeted phishing attacks or even deepfake scenarios. These cybersecurity gaps inevitably lead to compliance challenges.

Compliance and Regulatory Violations

Unauthorized AI use doesn't just expose data - it also creates serious compliance issues. For example, processing patient information through unsanctioned tools can lead to HIPAA violations. These tools typically lack Business Associate Agreements (BAAs), which are critical for governing how patient data is handled, stored, and protected.

The financial impact of such violations is staggering. In 2025, the average healthcare data breach cost reached $7.42 million [3]. Shadow AI incidents add $670,000 on average to these costs and account for 20% of all breaches - 7 percentage points higher than breaches involving approved AI tools [3]. Moreover, detecting and containing Shadow AI-related breaches often takes about a week longer than sanctioned AI incidents, increasing regulatory exposure [3].

Suja Viswesan, Vice President of Security and Runtime Products at IBM, underscores the broader consequences:

The cost of inaction isn't just financial, it's the loss of trust, transparency and control [3].

Beyond immediate penalties, compliance failures can erode accreditation, damage relationships with payers, and jeopardize reimbursement eligibility - issues that can have long-term effects on an organization's reputation and financial stability.

Patient Safety and Operational Impacts

Shadow AI also introduces significant clinical risks. Unvetted AI tools can produce "hallucinated" outputs - plausible but inaccurate information that may influence medical decisions in harmful ways. This is a top concern for healthcare professionals, as survey respondents consistently rank patient safety as the most pressing issue tied to Shadow AI [2].

These inaccuracies can disrupt workflows and lead to misguided clinical decisions. Additionally, reliance on unsanctioned AI for essential tasks like scheduling, documentation, or revenue cycle management creates operational vulnerabilities. Since IT teams cannot monitor or support these tools, they become invisible risks - potential single points of failure that could disrupt critical operations without warning.

With nearly 20% of healthcare workers admitting to using unauthorized AI tools for personal tasks [2], the operational risks extend far beyond what organizations can see or control. This lack of visibility makes it even harder to mitigate potential fallout, leaving healthcare systems exposed to unnecessary risks.

How to Detect Shadow AI Models in Healthcare

Shadow AI poses a challenge because it often operates under the radar. As Ashley Lejserowits from Prompt Security explains:

Shadow AI is first and foremost a visibility issue. You can't secure what you can't see, and right now, most companies are operating in the dark [5].

The numbers paint a concerning picture: 76% of organizations deal with unauthorized AI usage, yet IT departments typically detect fewer than 20% of the roughly 70 unauthorized AI tools a typical audit uncovers [5] [6]. Addressing this issue requires a multi-layered strategy that combines monitoring at three levels: network, endpoint, and user behavior.

Network Traffic Analysis and Endpoint Scanning

A key starting point is passive network monitoring, which observes traffic already flowing through your systems without requiring extra software installations on devices. Logs from systems such as SIEM tools, web proxies (like Zscaler or Netskope), and firewalls can be used to track AI-related activity. For example, monitoring DNS queries and API endpoints like api.openai.com, api.anthropic.com, or generativelanguage.googleapis.com can help identify AI usage.
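As a rough illustration of this kind of passive monitoring, the sketch below matches exported proxy or DNS log entries against a watchlist of the AI API domains mentioned above. The log field names (`timestamp`, `user`, `host`) are assumptions; adapt them to whatever schema your SIEM or proxy actually exports.

```python
# Sketch: flag AI-related traffic in exported proxy/DNS logs.
# The log record shape here is an illustrative assumption; real
# SIEM/proxy exports will need a small adapter to match it.

AI_API_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def flag_ai_requests(rows):
    """Yield (timestamp, user, host) for requests to known AI endpoints."""
    for row in rows:
        host = row.get("host", "").lower().rstrip(".")
        if host in AI_API_DOMAINS:
            yield (row.get("timestamp"), row.get("user"), host)

# Example with an in-memory log; in practice, iterate a CSV/JSON export:
log = [
    {"timestamp": "2025-01-07T09:14:00", "user": "jdoe", "host": "api.openai.com"},
    {"timestamp": "2025-01-07T09:15:02", "user": "asmith", "host": "intranet.example.org"},
]
hits = list(flag_ai_requests(log))
```

A domain watchlist like this is easy to maintain centrally and can be reused across the DLP and behavior-analytics checks described later in this section.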

For more precise detection, URI path pattern matching can reveal the specific models being accessed. This could include identifying models like "anthropic.claude-3-opus" on AWS Bedrock or specific GPT versions on Azure OpenAI. This level of detail not only confirms AI usage but also provides insight into which models are being utilized.
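The path-matching idea can be sketched with a couple of regular expressions. The two patterns below approximate Bedrock-style and Azure-style invocation paths; actual URI layouts vary by service version and deployment, so treat these as illustrative starting points rather than definitive signatures.

```python
# Sketch: pull model identifiers out of request URI paths.
# Both patterns are illustrative approximations of Bedrock-style and
# Azure OpenAI-style paths; tune them against your own observed traffic.
import re

MODEL_PATTERNS = [
    re.compile(r"/model/(?P<model>[\w.\-:]+)/invoke"),       # Bedrock-style
    re.compile(r"/openai/deployments/(?P<model>[\w\-]+)/"),  # Azure-style
]

def extract_model(uri_path):
    """Return the model identifier embedded in a URI path, or None."""
    for pattern in MODEL_PATTERNS:
        match = pattern.search(uri_path)
        if match:
            return match.group("model")
    return None
```

For example, `extract_model("/model/anthropic.claude-3-opus/invoke")` recovers the model name directly from the path, giving you the per-model inventory detail described above.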

However, most AI traffic is encrypted using HTTPS, which limits visibility. To address this, tools like TLS-intercepting proxies or API gateways can analyze metadata. Still, these methods may miss tools such as AI-enabled browser extensions or IDE plugins, which often operate outside standard network monitoring. These tools can transmit sensitive data to external servers, creating a gap in visibility.

This is where endpoint and browser auditing becomes critical. Regular audits, such as quarterly reviews of browser extensions on managed devices, can help identify unauthorized AI tools that bypass network-level monitoring. By combining these audits with data flow monitoring through DLP tools, organizations can expand their detection capabilities.

Data Loss Prevention (DLP) Tools

DLP tools add another layer of security by monitoring sensitive data flows, especially when it comes to detecting unauthorized transmission of PHI (Protected Health Information) or PII (Personally Identifiable Information). Integrating DLP tools with CASB systems allows organizations to flag unusual sensitive data transfers to AI endpoints, helping to prevent breaches that could cost an average of $670,000 extra per incident [5].

To strengthen these defenses, configure DLP systems to:

  • Detect and block large text transfers (over 500 tokens) into unauthorized AI tools.
  • Flag sensitive content being pasted into unapproved AI interfaces.
  • Monitor and alert on unusual data movements toward known AI endpoints.
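A minimal version of the first two rules might look like the following. The ~4-characters-per-token heuristic and the single PHI pattern are rough assumptions for illustration only; a production DLP engine uses far richer tokenization and content classifiers.

```python
# Sketch: a simplified DLP check for the rules above. The chars-per-token
# heuristic and the SSN pattern are rough illustrative assumptions, not a
# substitute for a real DLP engine's classifiers.
import re

UNAPPROVED_AI_HOSTS = {"api.openai.com", "api.anthropic.com"}  # example watchlist
TOKEN_LIMIT = 500
CHARS_PER_TOKEN = 4  # rough heuristic for English text

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # one illustrative PHI marker

def evaluate_transfer(dest_host, text):
    """Return a list of policy violations for an outbound text transfer."""
    violations = []
    if dest_host in UNAPPROVED_AI_HOSTS:
        if len(text) / CHARS_PER_TOKEN > TOKEN_LIMIT:
            violations.append("block: large transfer to unapproved AI tool")
        if SSN_PATTERN.search(text):
            violations.append("flag: possible PHI pasted into AI interface")
    return violations
```

In practice these checks run inside the DLP/CASB layer rather than in standalone code, but the logic is the same: combine a destination watchlist with size and content thresholds.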

These measures, combined with data classification policies, help mitigate risks. Additionally, analyzing user behavior can uncover covert Shadow AI use.

User Behavior Analytics

User behavior analytics can identify early signs of Shadow AI activity by focusing on high-risk departments like Finance, Legal, HR, or clinical teams handling sensitive data. Specific behavioral patterns to watch for include:

  • Unusual Usage Volume: Tools generating significantly higher traffic than average may indicate deep integration into workflows.
  • Access by High-Risk Departments: Employees working with regulated data using unapproved AI tools.
  • Repeated Access Attempts: Persistent efforts to access blocked AI domains despite existing restrictions.
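Two of these patterns can be sketched over the same AI-request events produced by network monitoring. The department labels and the volume threshold below are illustrative assumptions; calibrate them against your own baseline traffic.

```python
# Sketch: simple behavior analytics over AI-request events.
# HIGH_RISK_DEPTS and VOLUME_THRESHOLD are illustrative assumptions.
from collections import Counter

HIGH_RISK_DEPTS = {"Finance", "Legal", "HR"}
VOLUME_THRESHOLD = 100  # requests/week before a tool looks workflow-embedded

def analyze(events):
    """events: iterable of (user, dept, ai_host) tuples. Returns alert strings."""
    alerts = []
    per_tool = Counter(host for _, _, host in events)
    for user, dept, host in events:
        if dept in HIGH_RISK_DEPTS:
            alerts.append(f"high-risk dept access: {user} ({dept}) -> {host}")
    for host, count in per_tool.items():
        if count > VOLUME_THRESHOLD:
            alerts.append(f"unusual volume: {host} saw {count} requests")
    return alerts
```

Routing these alerts to compliance teams (or into automated user coaching) turns raw telemetry into the early-warning signals described above.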

The first instance of a user accessing a known AI tool - or the appearance of a new AI tool on the network - can signal Shadow AI expansion [4]. Setting alerts for these patterns allows compliance teams to act quickly. Often, the discovery of unauthorized AI use serves as a trigger for automated user coaching, guiding employees toward approved alternatives. This non-punitive approach not only improves compliance but also fosters trust [5].

Interestingly, 54% of employees admit they would use AI tools without company approval, with this number climbing to 62% among Gen Z and Millennial workers [5]. The goal of detection isn’t to penalize employees but to create visibility and guide them toward safer, authorized options - helping prevent breaches before they happen.

Managing Shadow AI Risks with Censinet RiskOps



Once Shadow AI is identified, the next step is to tackle its risks head-on. This is where Censinet RiskOps™ comes into play. Designed specifically for healthcare, this risk management platform transforms how organizations assess, handle, and minimize AI-related risks.

What sets Censinet RiskOps™ apart is its network-driven approach. Acting as a risk exchange, it connects healthcare organizations and vendors to share cybersecurity insights and risk assessments. This collaborative model means you don’t have to start from scratch when evaluating AI vendors or models. Instead, you can tap into the experiences and findings of others in the network, streamlining the process of identifying and mitigating risks.

Enterprise Risk Assessments with Censinet RiskOps™

Censinet RiskOps™ simplifies the risk assessment process through automation. The platform builds an inventory of AI tools, evaluates them using standardized questionnaires, and generates prioritized remediation plans. This approach has been shown to reduce risks by 65% within six months, potentially saving millions of dollars in breach-related costs [8][12].

One standout feature is the healthcare-specific risk scoring system. Modeled after a FICO score, it ranges from 300 to 850, providing a clear way to quantify risks. This scoring system makes it easier to communicate priorities to leadership and align on risk management strategies.
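To make the idea concrete, a linear mapping from a 0–100 risk score onto a 300–850 band could look like this. This is a hypothetical illustration of a FICO-style scale, not Censinet's actual scoring formula, and it assumes 100 represents the best (lowest-risk) posture.

```python
# Sketch: a hypothetical linear mapping onto a 300-850, FICO-style band.
# This is NOT Censinet's actual formula; polarity (100 = best) is assumed.
def to_fico_style(risk_0_100):
    """Map a 0-100 score (100 = lowest risk) onto the 300-850 range."""
    score = max(0, min(100, risk_0_100))
    return round(300 + (score / 100) * 550)
```

The appeal of a familiar band like this is communication: a "620" tells a board member roughly where the organization stands without any explanation of the underlying methodology.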

AI Vendor Management with Censinet AITM


Managing risks isn’t just about internal systems - it’s also about external vendors. That’s where Censinet AITM (AI Third-Party Management) steps in. It streamlines the vendor risk management process, from onboarding to ongoing monitoring and risk scoring.

For example, traditional assessments might miss risks introduced when a radiology software vendor integrates a third-party AI model. Censinet AITM automates these evaluations and maps supply chain dependencies, ensuring nothing slips through the cracks. In one case, the system flagged a radiology vendor’s unvetted subprocessors, preventing a potential data leak in a mid-sized clinic network [9]. Another example: a U.S. health system used Censinet AITM to assess over 50 AI vendors, uncovering that 20% lacked adequate data encryption. This led to renegotiated contracts, avoiding breaches before they could occur [10]. Additionally, real-time alerts helped another organization detect a vendor’s Shadow AI integration, preventing a compliance violation under HITRUST standards [10].

AI Risk Dashboards for Real-Time Visibility

The AI risk dashboards in Censinet RiskOps™ provide a centralized, real-time view of Shadow AI risks. These dashboards are packed with interactive features, including AI inventories, risk heatmaps, trend analytics, and alert feeds. Key metrics displayed include risk scores (on a 0–100 scale), PHI exposure levels, compliance gaps, and remediation progress.

One dashboard example revealed over 200 unauthorized API calls to external LLMs in a single week, prompting swift action [11]. Users can drill down into endpoint-level details to see which departments are using specific models and how sensitive data flows across systems. Visualizations like pie charts, line graphs (with dates formatted as MM/DD/YYYY), and geo-maps provide a clear picture of vendor locations and data usage.

As Brian Sterud, CIO at Faith Regional Health, explains:

Benchmarking against industry standards helps us advocate for the right resources and ensures we are leading where it matters [7].

These dashboards don’t just highlight risks - they turn data into actionable insights, enabling governance committees and executive leaders to make informed decisions that address issues proactively.

How to Secure and Govern Shadow AI in Healthcare

Reducing the risks of Shadow AI - whether they're tied to cybersecurity, compliance, or operations - requires a proactive approach. Once detection is in place, the next steps involve offering approved tools, establishing clear oversight, and building a workplace culture that prioritizes both security and clinical efficiency.

Deploy Approved AI Alternatives

Did you know that faster workflows drive nearly half (45%) of healthcare providers to use unauthorized AI tools [13]? When organizations fail to provide options that meet clinical needs, staff often take matters into their own hands. That’s where solutions like Censinet Connect™ come in, offering pre-vetted AI tools designed to meet these needs. This allows clinicians to access approved tools quickly, without the long delays often associated with traditional IT approval processes.

To make these tools more effective, involve experienced clinical stakeholders in their selection process. Structured AI declaration forms can streamline requests for new tools, ensuring clinicians feel heard and supported. When healthcare professionals are part of the decision-making process, they’re far more likely to adopt the approved solutions. These tools then become the foundation for broader oversight, bolstered by structured governance practices.

Establish AI Governance Councils

AI oversight shouldn’t rest solely on the shoulders of IT teams. Instead, effective governance requires collaboration across departments, bringing together clinical leaders, compliance officers, IT security experts, and operations professionals. Regular meetings of these governance councils provide the structure needed to monitor AI adoption, address emerging risks, and update policies as technology evolves.

Greg Samios, CEO of Wolters Kluwer Health, highlights the importance of this approach:

GenAI will provide an increase in staff efficiency and care quality, but we must preserve safety and clinician-patient relationships by reframing workflows that elevate GenAI from a tool to a partner [13].

These councils play a crucial role in ensuring AI use aligns with organizational goals and patient safety standards. They also review new AI tool requests, reducing the likelihood of Shadow AI creeping into workflows.

Education and Technical Controls

Education is the backbone of any governance strategy. Role-specific training helps staff understand AI policies in a way that’s directly relevant to their work. For example, training for clinical staff should differ from sessions designed for administrative teams. Consider offering annual training sessions and additional ones whenever policies change. Rather than punishing unauthorized AI use, treat each discovery as an opportunity to educate staff, understand usage patterns, and identify gaps in the approved tool set.

Technical controls add another layer of protection. Implement phishing-resistant authentication methods, like passkeys, to guard against credential theft. Use firewalls and endpoint monitoring to block unauthorized AI platforms, and ensure automated logging is in place for all approved AI endpoints. Offering sandbox environments for secure AI testing can also help mitigate risks. Together, these measures strengthen your organization’s defenses and close potential security gaps.

Conclusion

Shadow AI is here to stay, but healthcare organizations can regain control with the right strategies. Consider this: IT teams typically detect fewer than 20% of AI tools used by employees, audits uncover roughly 70 unauthorized applications per system, and breaches linked to Shadow AI add an average cost of $670,000 per incident [5].

Tackling this issue demands early detection, active oversight, and real-time enforcement. By integrating network analysis, endpoint monitoring, and user behavior analytics, organizations can not only identify risks but also implement effective governance and enforcement measures. As Ashley Lejserowits from Prompt Security explains:

Shadow AI is first and foremost a visibility issue. You can't secure what you can't see, and right now, most companies are operating in the dark [5].

Improved visibility is the foundation for centralized risk management. Tools like Censinet RiskOps™ provide real-time dashboards and alerts, enabling healthcare organizations to act on risks immediately. Paired with Censinet AITM for vendor evaluations and Censinet Connect™ for accessing pre-screened AI tools, these solutions allow organizations to transition from reactive problem-solving to proactive governance.

Static policies simply don’t cut it. While 34% of companies report governing AI usage, only a fraction conduct real-time audits, leaving a dangerous gap between policy and practice [5]. Healthcare organizations need to move beyond paperwork and embrace active enforcement. After all, as the saying goes, "Governance without observability is just paperwork" [5].

FAQs

How can we quickly confirm if PHI is being sent to an unauthorized AI tool?

To check if Protected Health Information (PHI) is being transmitted to an unauthorized AI tool, you can rely on passive monitoring methods. By analyzing network and activity logs, you can spot signs of unapproved AI usage. Using visibility tools to track all network traffic directed to AI services is particularly useful. These tools can help identify unapproved API calls or data transfers. Carefully reviewing these logs can uncover unauthorized PHI transmissions, enabling swift investigation and containment of any potential risks.
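A retrospective sweep of captured request logs can be sketched as follows. The MRN and SSN patterns and the log record shape are illustrative assumptions; real PHI detection requires far broader pattern coverage and context-aware classification.

```python
# Sketch: a retrospective sweep of captured request logs for PHI-like
# content sent to AI endpoints. The patterns and record shape are
# illustrative assumptions; real PHI detection needs broader rules.
import re

AI_HOSTS = {"api.openai.com", "api.anthropic.com"}
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
}

def sweep(records):
    """records: iterable of dicts with 'host' and 'body'. Yields findings."""
    for rec in records:
        if rec.get("host") in AI_HOSTS:
            for label, pattern in PHI_PATTERNS.items():
                if pattern.search(rec.get("body", "")):
                    yield (rec["host"], label)
```

Any hit from a sweep like this is a signal to start containment immediately: identify the user, the tool, and the scope of the data involved.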

What’s the first step to contain Shadow AI without disrupting clinical workflows?

To start, it’s crucial to have a clear view of all the AI tools being utilized within your organization. Right now, many companies struggle with oversight - IT teams typically detect fewer than 20% of the AI tools in use. By establishing this visibility, you can pinpoint any unauthorized models in use while ensuring clinical workflows remain seamless and unaffected.

What should an AI governance council own day to day?

An AI governance council plays a crucial role in the daily management of AI systems. This involves clearly defining roles, responsibilities, and processes to ensure these systems are used ethically and remain compliant with regulations.

Their key tasks include:

  • Maintaining an AI inventory: Keeping a detailed record of all AI systems in use within the organization.
  • Monitoring performance: Regularly tracking AI systems to ensure they function as intended and meet established benchmarks.
  • Managing risks: Identifying and addressing potential risks associated with AI deployment, including unintended consequences.
  • Overseeing policies: Enforcing policies related to data privacy, cybersecurity, and compliance to protect sensitive information and uphold legal standards.

In addition to these responsibilities, the council conducts regular audits and bias assessments to ensure fairness and accountability. They also develop incident response plans to address issues promptly and effectively. By fostering collaboration across various teams, the council ensures that AI usage aligns with both organizational goals and regulatory requirements.
