
The past 12 months have completely reshaped how AI shows up in healthcare, with unauthorized use not merely creeping in but surging. This year, roughly 60% of users turned to generative AI tools outside IT oversight, according to the 2025 Netskope Healthcare Threat Labs report.
In response, leaders are shifting from resisting the tide to attempting to manage it, with the same research showing 88% of healthcare organizations now actively integrating generative AI into clinical and operational workflows. They’re catching up to a simple fact: AI isn’t emerging anymore; it’s embedded in day-to-day work.
Yet this is just one strand of a broader transformation: Many health systems have simultaneously shifted away from locked-down Citrix setups toward more flexible Windows-based workflows, and in doing so, they’ve also dramatically widened their attack surface.
It’s exactly this shift that sat at the heart of the recent Securing Patient Care with Netskope One: Risk, Compliance and Continuity webinar. Across the session, speakers unpacked the three tensions shaping healthcare security right now:
- Risk: Can you see how AI, SaaS and data are changing your attack surface?
- Compliance: Can you demonstrate good governance without shutting down the tools care teams rely on?
- Continuity: Will patient care keep moving even when something breaks or you’re hit with ransomware?
Taken together, these tensions define the new reality security teams are working in, and frame the path forward.
Risk: Can you see how AI, SaaS and data are changing your attack surface?
For most healthcare organizations, the widespread use of AI isn’t the real problem. The problem is that AI is everywhere and largely invisible to the controls they already have in place.
Staff are using AI tools in ways that never touch a VPN or legacy gateway, so none of it sits neatly inside the traditional perimeter. And because most healthcare systems rely on a tangle of point products—each one seeing only its own slice of activity—this usage rarely shows up in a complete, reliable way.
Compounding this is the fact that attackers are already using generative tools to move faster, from creating more convincing phishing lures to automating early-stage reconnaissance. One recent Harvard Business Review study found that AI-driven phishing now has a 60% success rate.
If you can’t see who’s doing what with which data, you’re not managing risk—you’re guessing.
This is why a unified, data-first SASE and zero trust model is becoming essential in healthcare. Instead of accepting fragmented visibility across web gateways, CASBs, DLP tools and VPNs, a unified, in-line architecture gives you a single lens across web, SaaS, private apps and cloud services.
That unified view gives you the context you need, bringing user identity, device posture, app, activity and data sensitivity together to build a truthful picture of what’s actually happening.
And in clinical settings, where shared workstations are still the norm, that context becomes crucial. A single desktop in a ward might see 10 different users tap in and out over a shift, yet a typical legacy control sees one machine and one generic account.
With identity-aware integrations (for example, linking zero trust policies to solutions such as Imprivata’s tap-and-go model), policies can finally follow the individual clinician, not the physical device they happen to be standing at.
So, the same workstation can produce three entirely different experiences:
- Broad access for a staff nurse who needs approved SaaS tools
- Additional privileges for a registrar managing departmental accounts
- The bare minimum for a contractor who only needs limited access
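To make the pattern concrete, here is a minimal sketch of identity-aware access resolution. The role names and access tiers are hypothetical illustrations (not Netskope policy syntax); the point is simply that the decision keys on the authenticated clinician, never the workstation.

```python
# Illustrative sketch only: access follows the authenticated user,
# not the device. Role names and access tiers are hypothetical.

ROLE_ACCESS = {
    "staff_nurse": {"approved_saas", "ehr"},
    "registrar": {"approved_saas", "ehr", "department_admin"},
    "contractor": {"ticketing"},  # bare minimum
}

def resolve_access(user_role: str, workstation_id: str) -> set:
    """Return the app categories this user may reach from a shared workstation.

    The workstation ID would be logged for audit but never widens access:
    the same device yields different entitlements per authenticated user.
    """
    return ROLE_ACCESS.get(user_role, set())  # unknown role -> no access
```

Calling `resolve_access("staff_nurse", "ward-7-desktop-3")` and `resolve_access("contractor", "ward-7-desktop-3")` returns different entitlements from the very same machine, which is the whole point of tying policy to identity rather than hardware.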
The same logic applies to AI use. A zero trust model doesn’t rely on blunt allow/deny rules. Policies adjust in real time based on the sensitivity of the data and the context of the action. A clinician using a sanctioned AI assistant with anonymized data can proceed, but the moment someone tries to paste PHI into a public model, they might be coached, blocked or redirected to a governed corporate instance.
The goal isn’t to shut down AI use, it’s to make it visible, governed and safe.
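That contextual logic can be sketched as a simple decision function. The two inputs and the action names below are illustrative assumptions, not a real product API; real policy engines weigh far richer context (identity, device posture, data labels).

```python
# Hypothetical sketch of a context-aware AI policy decision.
# Inputs and action names are illustrative, not a vendor API.

def ai_policy_decision(app_sanctioned: bool, contains_phi: bool) -> str:
    """Decide how to handle a user action involving a generative AI tool."""
    if not contains_phi:
        # Anonymized or non-sensitive data can proceed; an unsanctioned
        # app still earns a coaching prompt rather than a hard block.
        return "allow" if app_sanctioned else "coach"
    if app_sanctioned:
        # PHI is tolerated only inside the governed corporate instance,
        # with the interaction logged as evidence.
        return "allow_with_logging"
    # PHI headed for a public model: block, or redirect to a governed instance.
    return "block_and_redirect"
```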
And once that lens is in place, long-standing blind spots across the wider estate come into view:
- Shadow SaaS growing around the EHR
- Research teams syncing data into unmanaged cloud apps
- Third-party vendors accessing private apps from unmanaged devices
This doesn’t require a massive, all-at-once transformation. Most organizations start with the basics: replacing a brittle VPN, securing users as Citrix is decommissioned, or putting modern controls around browser-based access. Once that slice of risk becomes visible, it becomes governable, and far easier to extend across the wider app estate.
Compliance: Can you demonstrate good governance without shutting down the tools care teams rely on?
Despite the rapid adoption of AI tools over the past few years, regulations haven’t softened and expectations haven’t shifted. So, compliance teams haven’t had the luxury of easing into the AI era. Auditors don’t care how quickly AI arrived; they still want the same answers they always have:
- Who touched sensitive data?
- Where did it go?
- How was it protected?
The trouble is that modern workflows rarely align with the design of traditional DLP and governance tools, because PHI no longer resides in neatly formatted text fields. It lives in PDFs, clinical images, scanned documents and mixed media—formats legacy pattern-matching tools were never built to understand.
The early instinct to “just block generative AI” created more problems than it solved. Users found ways around restrictions, compliance teams lost visibility and organizations were left with policies that looked strict on paper but didn’t reflect reality in practice.
A modern compliance approach has to bridge the gap between ambition and behavior. And that starts with recognizing that governance is about shaping AI or SaaS use, not stopping it.
With data-first architecture and unified controls in place, compliance teams can move from static rules to governed enablement:
- Allowing low-risk interactions with sanctioned AI tools
- Coaching users when they hit grey areas
- Automatically blocking or redirecting high-risk actions, such as pasting PHI into public models
- Capturing prompts, responses and decisions as evidence of responsible use
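That last bullet, capturing evidence, can be as simple as emitting one structured record per policy decision. A hypothetical sketch (the field names are illustrative; a real deployment would also capture data classification labels and the policy version that fired):

```python
import json
from datetime import datetime, timezone

def audit_record(user: str, app: str, action: str, decision: str) -> str:
    """Emit a structured evidence record for an AI/SaaS policy decision.

    Field names are illustrative examples, not a product schema.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "app": app,
        "action": action,      # e.g. "paste", "upload", "prompt"
        "decision": decision,  # e.g. "allow", "coach", "block"
    }
    return json.dumps(record)
```

Records like these, streamed to a log store, are what turn audits from a scramble for screenshots into a query over consistent evidence.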
With this shift, healthcare organizations can move toward continuous proof rather than retrospective justification. So, instead of scrambling for screenshots and log exports every time an auditor appears, teams can show live, consistent enforcement across web, SaaS, private apps and cloud services—all mapped back to the same data definitions and policies.
This model also eases pressure on overstretched teams by letting data policies, classification and access controls be defined once and applied everywhere, with partners able to handle tuning and reporting when needed.
And it works far beyond AI, covering clinical images, research datasets, collaborative SaaS use and hybrid workflows across on-prem and cloud environments. The result is continuous, predictable governance that doesn’t block the tools clinicians rely on.
Continuity: Will patient care keep moving when something breaks or you’re hit with ransomware?
In healthcare, continuity is a clinical metric, not an IT one. When systems go down, it’s patient care that’s affected: appointments back up, imaging is delayed, meds can’t be administered on time and clinicians are forced into slow, manual workarounds.
Yet, with the environments and critical apps care teams rely on now spanning web, SaaS, private data centers, cloud workloads and remote clinics, clinicians must move between them constantly. And if access is fragile, slow or inconsistent, they find workarounds—which introduce new risks and make recovery harder when something finally does break.
That’s why a modern continuity strategy must assume that things will go wrong. And whether networks degrade, credentials become compromised, a vendor goes offline or ransomware hits, the security model must keep workflows moving anyway.
This starts with creating consistent, high-performance access no matter where the user is or where the application lives. When security inspection happens in-line and close to the user, performance stays stable rather than collapsing behind a central choke point. And when clinicians don’t experience security as friction, they’re less likely to bypass controls under pressure.
Endpoint resilience is another core component. Security teams need to centrally manage and (when needed) lock down clinical workstations, carts and shared devices quickly. Hardened, immutable endpoints—often delivered through thin clients or controlled OS layers—reduce the blast radius when issues occur and make recovery far less chaotic.
Continuity also hinges on catching issues early and recovering cleanly. When security tools can detect suspicious behavior across web, cloud or lateral movement, they can isolate activity before ransomware spreads. Feeding those signals into backup and recovery workflows helps teams decide what data to trust and how far back to roll without reintroducing risk.
And as organizations retire legacy access tools, migrate workloads or consolidate after mergers, a unified zero trust access layer keeps policies consistent and care teams productive, providing much-needed stability during change.
These capabilities make continuity predictable, but they also point to the larger shift healthcare security now requires. Reducing risk, proving compliance and maintaining continuity aren’t separate projects anymore, but expressions of the same need: to see what’s happening across your environment and shape it in real time.
The organizations that meet these needs best will reduce risk, simplify governance and maintain continuity where and when it matters most.
For a deeper dive, watch the on-demand webinar here.