“Zero trust” still confuses people—and for good reason. While the term conveys a certain absolute authority (“zero,” “nope,” “nothing”), contemporary approaches offer much more nuanced capabilities. And while zero trust today is typically associated with security initiatives, the concepts have their origin in the definition of network perimeters, who is granted access, and how that access is provided.
The evolution of security hasn’t been from implicit trust to no trust, but rather toward contextual controls that grant the right people the right access to the right resources at the right time for the right reasons. But ultimately, making sense of zero trust requires an understanding of how the role of networking and infrastructure has shifted with respect to the critical objectives of security in recent years.
The changing role of the network: a brief history
In the earliest days of building networks and defining the enterprise perimeter, all companies were essentially islands. They built corporate networks to facilitate interactions between employees and data that was all on-premises. When the internet came along, everyone wanted to get in on that. But businesses realized fairly quickly that the internet’s default implicit trust was going to cause problems when it came to protecting themselves from outsiders with malicious intent.
The first natural step was to use the network to create demarcation points. Architectures evolved to include something called a DMZ, which serves a similar function to physical-world demilitarized zones (such as the 2.5-mile-wide strip of land between North Korea and South Korea, whose enforced isolation created an involuntary park now regarded as one of the best-preserved areas of temperate habitat in the world). This kind of “castle and moat” architecture actually worked for a long time. But as businesses evolved and required constant connectivity with other businesses—partners, suppliers, and in certain circumstances even their own customers—new patterns were required.
There were many attempts at creating these new patterns over the years. The Jericho Forum promoted de-perimeterization in the early 2000s. A few years later, I wrote about “the death of the DMZ” and delivered some Microsoft TechEd presentations where I advocated to authenticate every person and system, authorize all actions and behaviors, audit every activity and transaction, and encrypt where necessary. (Though today, I would change that last one to encrypt all the time.)
Then zero trust networking came along. This was useful—but it was still thinking largely in terms of gating access to networks. Google’s BeyondCorp initiative asked, “What if people were always on the internet, even when they’re in the office?” Under that model, users really only get an internet connection; all requests to interact with applications must flow through some kind of broker. Then Forrester’s Zero Trust eXtended (ZTX) framework came along, and Gartner offered its own early take on the concept, called CARTA (continuous adaptive risk and trust assessment).
Then the emergence of the software-defined perimeter architecture made the zero trust concept much more relevant. The software-defined perimeter hooked people to applications—regardless of what the underlying network infrastructure was like. This opened up new possibilities.
As a result, the zero trust network access (ZTNA) market soon emerged. Right now, we’re seeing a lot of emphasis on ZTNA—momentum that came out of COVID, when everyone suddenly had to work from home. While most enterprises initially tried to expand their VPNs to cope with this immediate shift, what they found was that their VPN concentrators were brittle. They hadn’t been updated or patched in a while, which meant they could be vulnerable. But even if companies could safely expand their VPN concentrator capacity, they were still running up against bandwidth constraints from backhauling traffic to their own facilities for security inspection. And it was especially inefficient because most of that traffic needed to hairpin right back out again to software-as-a-service (SaaS) applications or the web.
Letting the network do what it does best
ZTNA transcended the constraints of VPNs. I was wrong in 2019 when I said that ZTNA would replace VPN. I changed my view in 2020 to say ZTNA would augment VPN, because there are still legitimate cases where someone needs a VPN to access the network (such as for network administration or performance analysis). But in the vast majority of cases, you don’t need to be on the network—you just need access to an application. ZTNA gives you that access without any reliance on the underlying network architecture.
This uncoupling (and unburdening) meant that networkers didn’t need to concern themselves with individual access control policies for applications. The application owner creates policies and defines who’s allowed to interact, as well as the conditions that indicate the level of access (e.g., full, reduced, isolated, none). That responsibility no longer has to be dumped on the networker—who may not be equipped to make those decisions for applications anyway.
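The shape of such an application-owner policy can be sketched in a few lines of Python. This is a hypothetical illustration, not any vendor’s API: the idea of a contextual decision and the four access levels (full, reduced, isolated, none) come from the text above, while the specific request attributes, app name, and rules are invented for the example.

```python
from dataclasses import dataclass

# The four access levels mentioned above.
FULL, REDUCED, ISOLATED, NONE = "full", "reduced", "isolated", "none"

@dataclass
class AccessRequest:
    """Context gathered about a request (all fields are illustrative)."""
    user_group: str          # e.g., "finance"
    device_managed: bool     # corporate-managed device?
    device_patched: bool     # device posture check passed?

def payroll_app_policy(req: AccessRequest) -> str:
    """A policy a hypothetical payroll app's owner might define.
    The decision is contextual, not a simple on-network check."""
    if req.user_group != "finance":
        return NONE          # not authorized for this application at all
    if req.device_managed and req.device_patched:
        return FULL          # healthy corporate device: full access
    if req.device_managed:
        return REDUCED       # managed but out of date: limited functions
    return ISOLATED          # unmanaged device: isolated session only

print(payroll_app_policy(AccessRequest("finance", True, True)))    # full
print(payroll_app_policy(AccessRequest("finance", True, False)))   # reduced
print(payroll_app_policy(AccessRequest("finance", False, True)))   # isolated
print(payroll_app_policy(AccessRequest("sales", True, True)))      # none
```

The point of the sketch is that the policy lives with the application owner and evaluates context per request, so the network team never has to encode per-application rules into network access controls.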
Simultaneously, networkers could now focus on the things they do really well—ensuring that the network is highly available, reliable, and performant at getting bits where they need to go. In addition, the network team can help application owners deliver more consistent experiences: people interact with applications in exactly the same way wherever they happen to be—in the office, in a coffee shop, on an airplane, or at home.
The curious case of IoT discovery
Let’s ponder the Internet of Things (IoT) for a moment. If a company claims to have solved the problem of discovering all of its IoT devices, I would argue that’s a claim made in ignorance. Most companies have only a limited understanding of the IoT devices on their networks today. Poor visibility is the root of the problem—and the roots run deep.
Beyond finding all those devices, effective visibility requires figuring out what they’re talking to and what