Cybersecurity has a bad rap for getting in the way of business. Many CIOs and CISOs spend considerable time minimizing the performance drag that security solutions impose on network traffic while ensuring those solutions continue to do their job of keeping the network secure. The move to the cloud exacerbates this challenge.
A few years ago, a security team would install security services on a series of physical appliances. Firewall, URL filtering, email monitoring, threat scanning, and data loss prevention (DLP) functions, for example, might each run on its own box. The five appliances might be configured serially, such that a data packet would flow into one, the appliance would perform its standard service, then the packet would move on to the next appliance, which again would go through all its standard steps. The scalability of each service would be limited by the capacity of its physical appliance. And when the hardware was maxed out, performance of the security checks, and by extension performance of network traffic, would slow down. These challenges were only exacerbated by encrypted traffic flows, which had to be decrypted, scanned, and re-encrypted multiple times, once for each function.
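The cost of that serial chain can be made concrete with a small sketch. This is purely illustrative (the appliance names and per-step millisecond costs are assumptions, not measurements): when every appliance in the chain decrypts, inspects, and re-encrypts independently, the added latency is the sum of all three steps at every hop, whereas decrypting once and running every inspection in a single pass pays the crypto cost only once.

```python
# Illustrative model of serially chained security appliances.
# Appliance names and per-step costs (in ms) are assumed for illustration.
APPLIANCES = ["firewall", "url_filter", "email_monitor", "threat_scan", "dlp"]

DECRYPT_MS = 2.0
SCAN_MS = 1.0
REENCRYPT_MS = 2.0

def serial_chain_latency(appliances):
    """Each appliance decrypts, scans, and re-encrypts the same traffic."""
    return sum(DECRYPT_MS + SCAN_MS + REENCRYPT_MS for _ in appliances)

def single_pass_latency(appliances):
    """Decrypt once, run every inspection, re-encrypt once."""
    return DECRYPT_MS + sum(SCAN_MS for _ in appliances) + REENCRYPT_MS

print(serial_chain_latency(APPLIANCES))  # 25.0 ms of added latency
print(single_pass_latency(APPLIANCES))   # 9.0 ms
```

With five functions, the per-packet crypto overhead in the chained model is five times that of the single-pass model, which is why repeated decrypt/re-encrypt cycles dominate the latency story.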
Many customers attempted to improve scalability by shifting to virtual appliances, only to run into the same "bottlenecking" issue. Whether a solution runs in the cloud or on-premises, virtualization requires administrators to assign specific resources, including CPU, memory, and disk space. Some security platforms consolidate a range of different services. This gives the suite of solutions access to more resources in aggregate, but the services must compete for that finite pool, so ultimately performance is optimized for none of them. This resource "tug of war" is inherent to the design and forces trade-offs between security processing and performance.
Whatever the approach, physical, virtual, and cloud-based architectures typically have only so much room to scale horizontally. Beyond that point, resource limitations introduce latency into the solutions they house. A security infrastructure operating through a traffic pipeline with a fixed diameter will eventually hit those limitations and bottlenecks. Network speed suffers, which translates into a degraded user experience and, in the worst case, the risk of users bypassing security controls altogether, exposing the organization to risk.
Loosely coupled but independent microservices
As Netskope developed what is now our secure access service edge (SASE)-ready platform, we designed the architecture with the goal of overcoming latency that degrades the performance of traditional security solutions. To reach that goal, we rethought two aspects of how security technology fundamentally operates.
First, we consolidated key security capabilities into a single unified platform, while simultaneously abstracting individual security functions into what we at Netskope call "microservices." Processes such as data loss prevention (DLP), threat protection, web content filtering, and Zero Trust Network Access (ZTNA) run independently, each with its own resources. When resource limitations begin impacting the performance of one of the microservices, the Netskope Security Cloud is designed to automatically scale that microservice up (or out) by independently provisioning the required resources.
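The key property described above is that each microservice scales on its own signal, so a busy service never steals capacity from an idle one. A minimal sketch of that idea follows; the service names, replica counts, and utilization thresholds are hypothetical illustrations, not the actual Netskope implementation.

```python
from dataclasses import dataclass

@dataclass
class Microservice:
    """An independently scalable security function (illustrative model)."""
    name: str
    replicas: int
    utilization: float  # fraction of capacity in use, 0.0 to 1.0

def autoscale(service, high=0.80, low=0.25):
    """Scale a single service out or in based on its own load alone."""
    if service.utilization > high:
        service.replicas += 1          # scale out this service only
    elif service.utilization < low and service.replicas > 1:
        service.replicas -= 1          # scale in when mostly idle
    return service

services = [
    Microservice("dlp", replicas=4, utilization=0.92),
    Microservice("threat_protection", replicas=2, utilization=0.40),
    Microservice("web_filtering", replicas=3, utilization=0.10),
]

for svc in services:
    autoscale(svc)

print([(s.name, s.replicas) for s in services])
# [('dlp', 5), ('threat_protection', 2), ('web_filtering', 2)]
```

Because each decision reads only that service's own utilization, a DLP traffic spike adds DLP capacity without reassigning anything away from threat protection, which is the opposite of the shared-resource "tug of war" in a consolidated virtual appliance.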
For example, SSL interception is most likely to be limited by system input-output (I/O), trying to decrypt traffic it receives off the network. While TLS/SSL session setup is well-understood to be bound by the central processing u