Companies face a difficult situation when legacy on-premises security defenses are stretched to support a hybrid work environment and zero trust principles. Complications can include poor user experience, the complexity of disjointed solutions, high operating costs, and increased security risk with potential data exposure. Simple allow-and-deny controls lack an understanding of transactional risk, so they cannot adapt policy controls or provide real-time coaching to users. The implication of continuing down this path is wasted resources, constrained business initiatives, frustrated users, and an increased potential for data exposure and regulatory fines. The binary option of blocking new applications and cloud services impedes digital transformation, while an open allow policy provides no data protection against transactional risk and no real-time coaching for users on risky activities.
For example, generative AI has drawn a firestorm of attention, and security teams are reacting quickly as employees, contractors, and business partners leverage the value and economies of scale of this new class of SaaS application. However, there are risks around what data is shared with generative AI applications like ChatGPT and Gemini. Their popularity also gives attackers an opportunity to phish, lure, and collect data from unsuspecting users. Generative AI is therefore a good use case to illustrate the value of zero trust principles with SSE: providing safe access, protecting data, and preventing threats that exploit its popularity.
Blocking access to generative AI apps by specific domains or a URL category prevents the organization from realizing its benefits across many functional areas, including developing code, creating marketing content, and improving customer support. However, openly allowing access to generative AI apps puts sensitive data at risk from uploads for