There’s a saying that “data is the new oil,” and the expression highlights the value that the right data can bring to an organization. On the other hand, there’s also a saying that “data flows like water”: it will find its way out of its designated environment and onto systems it was never intended to be on. Even worse, it can end up in the hands of bad actors such as hackers, malicious insiders, or other criminals. In that sense, oil and water don’t mix. Regardless of where data lives, its value has skyrocketed, forcing us to re-evaluate how we go about protecting it (both in and out of the cloud).
When we talk about securing data, it is important to understand the meaning behind both of these expressions. When outlining a data protection strategy within a security program, you need to account for both the controls perspective and the business’s ability to execute.
A Real World Example
A few weeks ago, I was working on a project where engineering teams were wrestling with conflicting principles: securing data properly while simultaneously allowing the data science team to work with that data inside their development environment. From a security perspective the answer was clear cut; implementing it without impacting the data science team is what made things murky.
In our modern cloud-heavy world, where environments can be brought to life in a few minutes, security teams need to shift from simply implementing controls to providing guardrails that let others in the organization innovate. This can be accomplished by aligning the requirements of privacy, security, and business teams.
Getting to a place of agreement among the teams often requires analyzing the controls needed to address the business use cases and understanding how to manage your security program so the organization can extract the value it hopes to find in its data. Let’s take a look at a few of these controls and see how they can be implemented effectively within both traditional and cloud infrastructure.
Flexible Controls for Business Agility
One of the most important objectives of any security program is to keep developers, builders, and engineers out of production environments while development is underway. What does it take to accomplish this? You must find the balance between security and usability.
The intent of the control, in this case, is to prevent development changes from impacting production while providing just enough access to the data for a developer to complete their tasks. So how do we accomplish that? The answer lies in maintaining production-level controls in a lower environment that leverages “dummy” data for testing and verification.
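As a concrete illustration, here is a minimal Python sketch of one common way to produce that “dummy” data: keep the production schema but mask or tokenize sensitive fields before records ever land in the lower environment. The field names and the hashing approach are assumptions for illustration, not a prescription.

```python
import hashlib

# Fields treated as sensitive in this hypothetical schema.
SENSITIVE_FIELDS = {"email", "ssn", "full_name"}

def tokenize(value: str, salt: str = "dev-env-salt") -> str:
    """Replace a sensitive value with a stable, non-reversible token."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:16]

def mask_record(record: dict) -> dict:
    """Return a copy of a production-shaped record that is safe for a lower environment."""
    masked = {}
    for field, value in record.items():
        if field in SENSITIVE_FIELDS:
            masked[field] = tokenize(str(value))
        else:
            masked[field] = value
    return masked

# Example: the record keeps its shape and referential structure
# (tokens are stable), but no real sensitive data leaves production.
prod_record = {"id": 42, "email": "jane@example.com", "plan": "enterprise"}
print(mask_record(prod_record))
```

Because the tokens are deterministic, joins and aggregations in the data science environment still behave the way they would against real data, which is usually what the team actually needs.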
On paper, we often talk about production as a single environment, but in practice there is a plethora of environments from development to production and everything in between. Not all of these environments are created equal: one might require a specific data set with more rigorous controls, while another retains no sensitive data, so a different set of controls applies.
If we take the time to thoroughly understand and map out the controls, we can create an environment that still meets data confidentiality and privacy requirements. The controls for data security and data science should allow engineers to perform their modeling in a safe but secure environment. Because this is a simulated production environment, capabilities like high availability, regular backups, and tiered storage aren’t required, which reduces operational overhead without impacting the business or compromising on security.
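To make that mapping concrete, here is a hedged sketch of what an environment-to-controls map might look like in code. The environment names and control labels are assumptions chosen for illustration; the point is simply that confidentiality controls follow the data into the simulated environment, while availability-focused controls stay production-only.

```python
# Hypothetical control map: which controls apply in which environment.
# Confidentiality controls follow the data; availability controls stay in production.
CONTROL_MAP = {
    "production": {
        "encryption_at_rest",
        "least_privilege_access",
        "audit_logging",
        "regular_backups",
        "tiered_storage",
        "high_availability",
    },
    "simulated_production": {    # data science / lower environment with dummy data
        "encryption_at_rest",
        "least_privilege_access",
        "audit_logging",
        "data_masking",          # dummy data instead of real records
    },
}

def missing_controls(environment: str, implemented: set) -> set:
    """Return the required controls an environment has not yet implemented."""
    return CONTROL_MAP[environment] - implemented

# Example: check a newly provisioned simulated environment against the map.
print(missing_controls("simulated_production", {"encryption_at_rest", "audit_logging"}))
```

A map like this also gives security, privacy, and business teams a shared artifact to argue over, which is usually faster than debating each environment from scratch.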
Summary
In many organizations, creating environments like this can be quite difficult, complex, and expensive. The requirements are often too vague, and in many cases the binary understanding of production versus simulated environments gets in the way. Taking the time to map controls to the actual business use cases, as described above, is what makes these environments achievable without compromising security or the teams that depend on the data.