Co-authored by James Robinson and Jason Clark
No sooner did ChatGPT and the topic of generative artificial intelligence (AI) go mainstream than every enterprise business technology leader started asking the same question.
Is it safe?
At Netskope, our answer is yes, provided we do all the right things: protect sensitive data, use AI/ML responsibly in our own platforms and products, and effectively convey that use to our customers, prospects, partners, and third- and fourth-party suppliers so they can build governance-driven programs.
Managed allowance of ChatGPT and other generative AI tools is a necessity. Organizations that simply “shut off” access may initially feel more secure, but they are also denying themselves ChatGPT’s many productive uses and putting their entire teams behind the innovation curve.
Managed allowance of ChatGPT
Netskope has been deeply focused on the productive use of AI and machine learning (ML) since our founding in 2012, and many AI/ML innovations—dozens of them patented—are already part of our Intelligent SSE platform. Our Netskope AI Labs team routinely discusses AI/ML and data science innovation with Netskope customers and our internal community.
Like everyone, we’ve just observed an inflection point. Before November 2022, if you weren’t a security practitioner, developer, data scientist, futurist, or technology enthusiast, you likely weren’t doing much with generative AI. Since the public release of ChatGPT, though, these services and technologies are available to the layperson: anyone with a browser can, right now, see firsthand what ChatGPT can do.
When something is so pervasive that it becomes the dominant topic of conversation in business and technology this quickly—and ChatGPT definitely has—leaders have essentially two choices:
- Prohibit or severely limit its use
- Create a culture where people can understand and embrace this technology without putting the business at risk
At Netskope, we enable responsible access to ChatGPT today for those on our team who should have it. Here at the dawn of mainstream generative AI adoption, we’re going to see at least as much disruptive behavior as we did at the dawn of the online search engine decades ago, when new kinds of threats emerged and a lot of data that really should not have been public was made publicly available.
But remember: now, as then, the grand strategy of security is to keep sensitive data away from anyone who shouldn’t have access to it. With ChatGPT and other generative AI applications, this can be done by combining the right cultural orientation (allow it responsibly) with the right technology orientation: modern data loss prevention (DLP) controls that prevent the misuse and exfiltration of data, as part of an infrastructure that enables teams to respond quickly if that data is misused.
A recent blog, “Modern Data Protection Safeguards for ChatGPT and Other Generative Applications,” touches on how the Netskope platform, and our modern DLP specifically, helps prevent the cyber risks inherent in generative AI applications. Read it for a deeper dive, but to summarize: your DLP needs to be able to set policies at two levels. “This should never go out” covers data sets or cohorts that would be dangerous to the business if exposed; “this shouldn’t go out” covers data sets or cohorts you don’t want compromised, but whose exposure would not materially disrupt the business.
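To make those two tiers concrete, here is a minimal sketch of how a policy engine might triage outbound text, such as a prompt pasted into a generative AI chat. The patterns, names, and tiers below are invented for illustration; this is not Netskope’s implementation or API.

```python
import re
from enum import Enum

class Action(Enum):
    BLOCK = "block"  # "this should never go out": stop the request outright
    ALERT = "alert"  # "this shouldn't go out": allow, but flag for review/coaching
    ALLOW = "allow"

# Hypothetical pattern tiers; a real deployment would use far richer detectors.
NEVER_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),               # US Social Security numbers
    re.compile(r"-----BEGIN (RSA )?PRIVATE KEY-----"),  # private key material
]
SHOULDNT_PATTERNS = [
    re.compile(r"\bconfidential\b", re.IGNORECASE),     # loosely sensitive markers
    re.compile(r"\binternal roadmap\b", re.IGNORECASE), # planning documents
]

def evaluate_outbound(text: str) -> Action:
    """Classify a prompt bound for a generative AI app into a policy tier."""
    if any(p.search(text) for p in NEVER_PATTERNS):
        return Action.BLOCK
    if any(p.search(text) for p in SHOULDNT_PATTERNS):
        return Action.ALERT
    return Action.ALLOW

# Example: a prompt pasted into a generative AI chat window
print(evaluate_outbound("Summarize this record: SSN 123-45-6789"))  # Action.BLOCK
```

The design point is the separation of tiers: the “never” tier fails closed and blocks, while the “shouldn’t” tier allows the action but generates a signal that teams can use for coaching and rapid response.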
Netskope advanced DLP can automatically identify flows of sensitive data, categorizing that data with exacting precision. That includes AI/ML-based image classification and the ability to build custom ML-based classifiers, plus real-time enforcement, applicable to every user connection, that can selectively stop sensitive data before it leaves the organization.
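To illustrate what a custom ML-based classifier can look like in miniature, here is a hedged sketch using scikit-learn. The training examples and labels are invented for illustration, and a production classifier would be trained on far larger, curated data; this is not the model Netskope ships.

```python
# A toy text-sensitivity classifier in the spirit of the custom ML-based
# classifiers described above. Labels: 1 = sensitive, 0 = benign.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Q3 revenue forecast and unreleased earnings figures",
    "customer list with contact emails and contract values",
    "draft patent claims for the new detection engine",
    "lunch menu for the office cafeteria this week",
    "public press release about our conference booth",
    "directions to the downtown office parking garage",
]
labels = [1, 1, 1, 0, 0, 0]

# TF-IDF features feeding a logistic regression classifier
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

def is_sensitive(prompt: str, threshold: float = 0.5) -> bool:
    """Score an outbound prompt; enforcement would block or coach above threshold."""
    return clf.predict_proba([prompt])[0][1] >= threshold

print(is_sensitive("paste of unreleased earnings figures for Q3"))  # likely True
```

In a real-time enforcement path, a score above the threshold would trigger the policy tiers sketched earlier: block outright, or allow with coaching, depending on the sensitivity of the data involved.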