Shamla Naidoo:
This job is really hard and it continues to get harder. But at this point, there's very little in the way of mental health support for the security leaders and for the security teams. So I really think that CEOs are going to start to double down on not just innovating for the business but also helping the CISOs to create innovation for security, giving them the tools, the technology, and the solutions to help them do their jobs better, but also supporting that with mental health and wellness support programs.
Producer:
Hello, and welcome to Security Visionaries, hosted by Jason Clark, CSO at Netskope. You just heard from one of today's guests, Shamla Naidoo, Head of Cloud Strategy and Innovation at Netskope. In this episode, Shamla is also joined by Steve Riley, Field CTO at Netskope; Mike Anderson, Chief Digital and Information Officer at Netskope; and last but certainly not least, David Fairman, APAC CSO at Netskope.
Producer:
As we welcome the New Year with open arms, security leaders around the world are continuing to try and stay five steps ahead of bad actors in the space. To kick off 2022, we brought together some of the sharpest leaders in the industry to share what predictions are top of mind on their risk radars. We hope you enjoy this round table discussion and from everyone at Netskope, we want to wish you a happy and healthy New Year.
Sponsor:
The Security Visionaries podcast is powered by the team at Netskope. Netskope is the SASE leader, offering everything you need to provide a fast, data-centric, and cloud-smart user experience at the speed of business today. Learn more at N-E-T-S-K-O-P-E.com.
Producer:
Without further ado, please enjoy episode seven of Security Visionaries with your host, Jason Clark.
Jason Clark:
Welcome to Security Visionaries. I am your host, Jason Clark, CSO at Netskope. Today I'm joined by some of the best experts in the industry, and we're going to be talking about predictions. It's always a big topic this time of year, but we're going to try and bring to light the ones we all need to be paying attention to for 2022 and beyond. First guest is Steve Riley. Great to have you here. How are you doing?
Steve Riley:
Thanks, Jason. How about yourself?
Jason Clark:
Doing super fantastic. And Dave Fairman, how are you?
David Fairman:
Hey, Jason, good to be here. Thanks for including me in your conversation this week. I'm doing well, mate. I'm doing really well. I'm looking forward to the Christmas and New Year break.
Jason Clark:
What time is it in Australia right now?
David Fairman:
2:00 AM in the morning. So I'm hoping my responses to this conversation will be eloquent considering the time.
Jason Clark:
Yeah. Thanks for staying up for us. It'll be awesome.
David Fairman:
Oh, good man.
Jason Clark:
And Shamla, how are you?
Shamla Naidoo:
Hey, Jason, thank you so much for including me in this fantastic conversation. I'm looking forward to it.
Jason Clark:
Awesome. And Mike?
Mike Anderson:
Hey, good morning. It's great to be here, looking forward to hearing some great predictions this morning on this podcast.
Jason Clark:
Well, perfect. Let's keep it lively and really just bring anything up you want to and comment on any of these as we go through, just so we make it fun for the audience. But again, everybody here, as you'll see and can look up, they're all amazing experts in the industry that I've known for a very long time. So the first thing I wanted to start off with is kind of a prediction, but it's also very obvious. I call it a little bit of a softball, but I bring it up because I'm worried not everybody's thinking about it. And that is the return to work, meaning everybody was working from home and then your company says, "It's time to come back in the office three days a week or five days a week." We're already starting to see this. A significant number of people have either A, already moved but didn't tell their employer, or B, decided, you know what? I like working from home and don't want to go back to the office. And with that, we're going to see a lot of attrition and turnover, and that comes with insider threat. When somebody decides to change jobs, they see their work product as their own. And what we see is an increase of over 10X in downloads of information they have touched or worked on. It could be anything from somebody on the sales team downloading all their customer lists so they can take them to their next place. So it's just something that every security team should be thinking of, not just thinking about the external threats. So anybody have any thoughts on that one?
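To make the 10X download spike Jason describes concrete, here is a minimal sketch of the kind of check an insider threat program might run against proxy or CASB download logs. The field names, baselines, and the 10X threshold are illustrative assumptions, not any particular product's logic:

```python
from collections import defaultdict

def flag_download_spikes(events, baselines, multiple=10):
    """Flag users whose download volume exceeds `multiple` times their baseline."""
    totals = defaultdict(int)
    for user, nbytes in events:          # events: (user, bytes_downloaded) records
        totals[user] += nbytes
    flagged = []
    for user, total in totals.items():
        baseline = baselines.get(user, 0)
        if baseline and total > multiple * baseline:
            flagged.append((user, total, baseline))
    return flagged

if __name__ == "__main__":
    baselines = {"alice": 50_000_000, "bob": 20_000_000}   # typical daily bytes per user
    events = [("alice", 600_000_000), ("bob", 15_000_000)]
    for user, total, baseline in flag_download_spikes(events, baselines):
        print(f"{user}: {total} bytes vs baseline {baseline} -- review for insider risk")
```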
David Fairman:
No, I think that's a fair prediction. People are talking about this being the new era of resignation, of people leaving the organization. I think we're probably going to see a bit more of a rise in that activity. I know we're talking about it here in the region.
Jason Clark:
Yeah, I think the place that catches people a little bit blind is really the use of all the personal apps, all the storage apps, et cetera. A lot of organizations aren't inserting that information, that traffic, into their existing insider threat processes. So that's the place I really recommend people look into. So Shamla, I wanted to start with you. You have a long shot prediction that technology-specific security vendors are going to redefine and rebrand themselves as SSE vendors. Can you unpack that prediction and share your thoughts?
Shamla Naidoo:
Yes, absolutely. If you look out there today, Jason, most of the cybersecurity vendors who provide products or services or tools are rebranding themselves as Security Service Edge vendors, and they're really pushing this idea of zero trust. And so what you have is everyone who's doing things like securing or protecting files, protecting servers, protecting networks, acting as gateways, acting as data leak prevention tools, everyone is branding themselves as a zero trust vendor securing the edge. What that is doing, I think, is going to create an enormous, growing burden for the CISO, because now we've shifted the burden of what we actually do onto this very generic term. We're leaving it up to the consumer or the decision makers to determine whether or not these solutions address strategic gaps, which gaps they address, and what the pros and cons are. We're also leaving it up to the consumer to decide which ones they need versus which ones they can do without. And I feel like that is unfair to the industry, because if everyone says they're a zero trust vendor and there's no strategic or industry definition for what is included in zero trust and where the edge is, that just makes the job of the CISO much, much harder. Because really, when you think about it, where is this edge that these SSE vendors are going to be addressing? It's everywhere. It's wherever we conduct transactions, it's wherever we conduct business. So the edge basically is everywhere. And we know from experience that not every provider, not every vendor, can actually address all of the issues in those environments. And so that's why I think that as companies rebrand themselves, it's just going to increase the burden on the CISO.
Jason Clark:
You know what? I think it's the same thing with SASE. As soon as SASE came out, you started seeing there are now 50 SASE companies. Everybody's just calling themselves SASE, or they started buying companies with no integration and then saying, "Hey, we have all the parts and they all work together," and they really don't. But Steve, you recently published something around zero trust, and Dave, you as well, some good articles. Any additional thoughts on Shamla's prediction?
Steve Riley:
Yeah, I think it's important to remember what these topics are. SASE is intended to be an architecture, and zero trust is intended to be a new way of thinking about assessing the trustworthiness of an interaction. Zero is a starting point, but ultimately there has to be some level of trust for any two entities to interact. And we don't want to just assume that you have full access to everything because of what your IP address is. I really love the way that vendors in this space are moving more toward a continuous adaptive trust approach, where you look at all these contextual signals and determine just how much access to grant, for just that interaction, for just that amount of time.
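Steve's continuous adaptive trust idea can be pictured as a per-interaction scoring function rather than a static ACL. A minimal sketch; the signals, weights, and thresholds below are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Context:
    # Hypothetical contextual signals evaluated for a single interaction.
    managed_device: bool
    mfa_passed: bool
    geo_matches_history: bool
    data_sensitivity: str   # "low", "medium", or "high"

def access_level(ctx: Context) -> str:
    """Decide access per interaction from contextual signals, not a static ACL."""
    score = 0
    score += 40 if ctx.managed_device else 0
    score += 30 if ctx.mfa_passed else 0
    score += 20 if ctx.geo_matches_history else 0
    required = {"low": 30, "medium": 60, "high": 80}[ctx.data_sensitivity]
    if score >= required:
        return "allow"
    if score >= required - 20:
        return "allow_read_only"   # partial trust: reduced access, not all-or-nothing
    return "deny"

print(access_level(Context(True, True, True, "high")))    # allow (score 90)
print(access_level(Context(True, True, False, "high")))   # allow_read_only (score 70)
print(access_level(Context(False, False, True, "high")))  # deny (score 20)
```

The point the sketch captures is that trust is graded and contextual: the same user can get full access, reduced access, or none, depending on the signals present at that moment.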
Jason Clark:
Yeah, I agree, and trust isn't binary. It's not on or off. I think a lot of those vendors Shamla mentioned still kind of view zero trust like an ACL, that trust is something you either have or you don't. And you can ask a lot of vendors to define what zero trust even means, and you're going to get different answers from most of them as well. So there it is: we are hurting the industry, there's a lot of confusion. David, you had an interesting prediction around deepfakes, and voice cloning and misinformation. Why is this a prediction that you're considering, and how should users and companies be thinking about protecting themselves?
David Fairman:
Well, look, I think even this year, and probably in the preceding year, we started to see the rise of deepfakes as a tool for various nefarious purposes, whether for political influence, for increasing fraud and scams, or for supporting social engineering attacks over social media, et cetera. There were a couple of really notable events this year. One involved an energy company where a fraud was committed, and the tool used to support that fraud was a deepfake. And there was another, unsuccessful attack on a technology company using the same capability. I think what we're starting to see now is the fraud element, whether it's identity fraud or, not business email compromise exactly, but scams, impersonation of executives, and pressure put on employees to act in the moment. Deepfakes are something that helps adversaries, threat actors, fraudsters execute on those attacks, particularly when you think about scams and those executive pressure techniques we see. We're also starting to see deepfakes used as a vector to increase the success of application fraud. We hear of things like ghost fraud, which is effectively taking over the identities of people who are deceased, and that has started; we're seeing a rise in it in the financial crime world. So look, I think we started to see this in the past couple of years, and I think it's only going to increase as we move into 2022 and 2023. Deepfake technology is becoming more and more sophisticated, more and more accurate, and I think it's hard for organizations to combat that. You asked how organizations should be thinking about this, what they should be doing to prevent it. Funnily enough, if you think about the fraud angle, it comes back to some of the basics of fraud prevention: validating that you know who you're talking to or who you're transacting with. Use mechanisms so that you're doing that authentication, that validation, out of band. Just don't trust an audio file to prove that the person you're talking to is who you think they are. How do you validate that? Training and awareness of your people. There are certain cues and signals you can identify when you look at deepfake videos to spot fraudulent video images or audio files. So there are a number of different things there. I mean, we basically saw it in the US elections around political influence with fake news; we've seen all of that. I think it's only going to grow, and I think adversaries are going to start to really embrace this more than they have today as a social engineering capability, which again is going to lead to substantive cyber attacks.
Jason Clark:
A good example of the executive pressure, we actually just saw this. I got alerted a couple of weeks ago that a person in Europe got a text message and a phone call. It was someone acting like they were the CEO, telling this person to go take an action: "Immediately, urgent, I need you to take this action." And the person's kind of like, "Okay, well, we'll jump on a call." "Hey, I don't have time to jump on a call." And his response was, "Well, what's my favorite soccer team?" or "What's my favorite hockey team?" And immediately the conversation went dead. But at first, admittedly, this person was like, "Oh, I really thought he needed me to go do that. I thought it was a weird request, but I was ready to step into action." So the thinking and the awareness training kicked in and worked, and we've set out to continue doing a better job of that.
David Fairman:
That's exactly what happened in the energy company attack, and that one was successful. I think it led to a fraud of about 236,000 US dollars. So it will be there, we'll see more of it, and I think we'll see deepfakes preceding those social engineering attacks. And think about political influence, not just at the national level but from an influencing perspective generally. Think about how you can drive not just society or a subset of our community down a certain path, but how that could cause division within an organization, a private institution. If deepfakes were used to send messages to executives around culture or around activity we're doing, it could be done on a much smaller scale, and it could start to influence and destabilize organizations. So maybe destabilization of organizations becomes the motivation for threat adversaries. I think we'll see a rise in that, because it's just a tool to drive that lack of trust in the environment.
Jason Clark:
I think it's a great one, spot on. Steve? Another prediction you've been talking about is organizations across the globe starting to measure their carbon footprint in relation to IT and their data centers. How likely is this? What percentage of organizations do you think will do this?
Steve Riley:
I think it's totally likely, and it's going to affect 100% of organizations out there. We know that priorities from investors and stakeholders and forthcoming regulatory requirements are going to push organizations to improve the methods and processes they use to account for their carbon footprint. And I see two dimensions here: a security dimension and an infrastructure dimension. In the infrastructure dimension, folks are going to think, "What do I do in my data centers? How about moving them to where you can get free air cooling, like Iceland?" Open the windows, get free air cooling in your data center. If you can't pick up your data center and move it, you could look at things like renewable energy credits, or onsite generation technologies like cogeneration that combine cooling and power. Just making sure that you're getting good utilization out of the hardware you have is another way of reducing overall power consumption. And for water specifically, I've seen instances where orgs have moved from potable water from the mains to using gray water sources as a way of reducing costs. Now, that's for people who still want to have on-premises data centers. I would argue that migrating to the cloud is another way of getting close to net zero emissions. The cloud providers have pretty strong financial incentives to develop energy-efficient data centers. They run with an effectiveness much better than what a typical organization might be able to achieve, and they've got programs in place to make their operations carbon neutral. Even more so now, you can see that Azure and GCP and AWS publish the emissions from their data centers, so organizations moving to the cloud can say, "Hey, we've reduced our emissions by going this way." Now, I said there's a security dimension too. It's interesting, because I've seen some instances where security risks are acquiring climate change dimensions. I'll give you a couple of examples. Hacktivists: these are attackers motivated by issues, and they're targeting large enterprises with large carbon footprints. So if you're a huge emitter, someone's going to come after you. And this is a brand new risk, because this is just an enterprise doing business as usual, but now they're getting attacked for some reason they may find very difficult to understand. But also think about how many of us work now. We're all distributed, we're at home, and we don't have business continuity capabilities in our home offices. So a weather event could actually take a lot of home workers offline. So climate change and sustainability are part of boardroom and shareholder conversations now. It's time to start thinking about this.
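The accounting Steve describes comes down to simple arithmetic: facility energy is IT load scaled by overhead (PUE), and emissions are energy times the grid's carbon intensity. A back-of-envelope sketch, with purely illustrative numbers:

```python
def annual_emissions_tonnes(it_load_kw, pue, grid_kg_co2_per_kwh):
    """Rough annual CO2e: IT load * PUE overhead * hours * grid carbon intensity."""
    hours = 24 * 365
    facility_kwh = it_load_kw * pue * hours
    return facility_kwh * grid_kg_co2_per_kwh / 1000   # kg -> tonnes

# Hypothetical comparison: a typical on-prem room vs. an efficient cloud region.
on_prem = annual_emissions_tonnes(it_load_kw=500, pue=1.8, grid_kg_co2_per_kwh=0.4)
cloud = annual_emissions_tonnes(it_load_kw=500, pue=1.1, grid_kg_co2_per_kwh=0.1)
print(f"on-prem: {on_prem:,.0f} tCO2e/yr; efficient cloud region: {cloud:,.0f} tCO2e/yr")
```

Even with made-up inputs, the structure shows why migration claims are credible: both the PUE factor and the grid intensity factor shrink, and the two multiply.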
Jason Clark:
I think it's a good one, Steve. I think a lot of listeners are probably like, "Oh, you know what? I hadn't really put that on my risk radar yet, but let me write that one down, because it is something that's probably going to come up at some point and we need to already be thinking about it." So Mike, I know you had some thoughts around autonomous cybersecurity, where you remove the human delay in policy management. Why is this a top-of-mind prediction going forward?
Mike Anderson:
When you think about any kind of technology, if you think about traditional plan, build, run concepts, a lot of focus has traditionally been put on how to reduce the run cost of any IT solution. When we think about cybersecurity, the run is really security operations. And there are two reasons why I feel like this prediction will start to play out. One is that the actors are getting more and more sophisticated. Our risk increases with the delay it takes for us to implement policy within our environments. So whether that policy is keeping people from going to the wrong places, or coaching people to make other decisions in the moment, the faster we can get those policies instrumented in our environments, the quicker we can respond to the threats that are happening. The other challenge we've got is the skill gap, especially in cybersecurity. Trying to hire and retain people in security operations is going to be problematic, and we've already seen that. So what happens in a lot of organizations, especially medium enterprises, is you've got the one person who's maintaining the platform and doing the policy management, and then that person leaves. So there's this gap between when policies are being created and when the new person comes in, gets trained up, and starts administering the solution. And just like in IT operations, where AIOps has become kind of the new buzzword, that's going to move its way over into cybersecurity. It's going to help address both the skill gap problem and the problem of what happens when people leave, who picks up the keys to the car and keeps it running, and at the same time help us respond to threats more quickly. I think what we're going to see first is more of your traditional approach, like the sales automation vendors, like Salesforce, predicting the next best action you should take from a sales standpoint. We have to first establish trust. If I'm going to turn on autonomous cybersecurity, I have to first trust the decision-making process that's going into it. So I think what we're going to see as a first step is models that essentially suggest, "Here are the policies we should create," and get a person comfortable enough to say, "You know what? 99% of the time I just click okay and approve it." That builds the trust level where someone says, "I'll go ahead and turn on that autonomous mode." So I think we're going to see this multi-step journey, but in the next three to five years, we're really going to start to see autonomous cybersecurity being used as a way to reduce run costs. And the other aspect is just financial. I talk to a lot of my peers in the industry and they say, "Look, my CFO is asking me, when do we get to a point where we can say a certain percentage of our revenue is going to go into cybersecurity?" So we're going to see the same pressures that we've seen on IT and technology and, honestly, every function in the company. We're going to start seeing those same pressures on security to say, "What is good enough?" And when we get to that point, we're going to see optimization of our security stack and the tools we use.
Automation is a way to, again, reduce that run cost so I can reinvest in more of my strategic priorities to head off new threats, like how AI may be used by some of the bad actors.
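The "suggest first, automate later" loop Mike describes might look like the following sketch. The policy model, confidence threshold, and review step are hypothetical stand-ins, not any vendor's actual workflow:

```python
def suggest_policy(threat_signal):
    """Stand-in for a model that proposes a policy from an observed signal."""
    return {"action": "block",
            "target": threat_signal["domain"],
            "confidence": threat_signal["score"]}

def human_approves(policy):
    # In practice this would be a review queue; auto-approve for the demo.
    print(f"Review requested: {policy}")
    return True

def apply_policy(policy):
    print(f"Applying: {policy['action']} {policy['target']}")
    return policy

def handle(threat_signal, autonomous=False, auto_threshold=0.99):
    """Suggest first; apply automatically only once autonomy is trusted."""
    policy = suggest_policy(threat_signal)
    if autonomous and policy["confidence"] >= auto_threshold:
        return apply_policy(policy)   # machine acts with no human delay
    if human_approves(policy):        # analyst just clicks "okay"
        return apply_policy(policy)
    return None

handle({"domain": "evil.example.com", "score": 0.95})                    # human in the loop
handle({"domain": "evil.example.com", "score": 0.995}, autonomous=True)  # autonomous mode
```

The design choice the sketch illustrates is that autonomy is a switch you earn, not a starting point: the same pipeline runs in both modes, and the human is removed only where confidence is high.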
Jason Clark:
Yeah, I'm curious, from the team, have any of you already seen autonomous cybersecurity in either the software supply chain, or infrastructure security, or cloud security in any way?
Shamla Naidoo:
So Jason, let me try to address that. There's a reason why you're seeing all these heads shaking, people saying, "We haven't seen this." It's because so much of the responsibility for making those kinds of decisions falls on the security leaders. And given how personal the failure and the outcomes are, it's hardly any surprise that we're not willing to let machines make these decisions. We don't just put one layer of humans in to help make the decisions; we put multiple layers, because of the fear of failure. And the fact is, with machine learning and artificial intelligence algorithms, if you don't have some appetite for failure at the beginning, allowing self-learning to get better and to improve, you're just not going to be successful. So the point is that we are not going to see autonomous security until we give security leaders a little bit of leeway to make mistakes and allow the machines to make some mistakes early on, while we teach, and learn, and make better automated decisions.
Mike Anderson:
I'd add on: earlier we were talking about zero trust, and Steve, I think you and I have had this conversation before, around the move from implicit trust to explicit trust, and from trust-but-verify to verify-then-trust. I think the same thing happens as we think about autonomous. The verification is, how do I know it's going to make the right decision? And once I know, then I can trust it. So I think it becomes pervasive as we think about the mindset. When people talk about zero trust, the first thing I say is that it's not a product or a destination, it's a journey. It's just like an agile mindset. It's a zero trust mindset, which is really shifting to, I'm going to verify, then trust, in the things I do, and the more I know about you, the more I'm going to trust you. And going back to that conversation from earlier, zero trust requires, as they say, teamwork makes the dream work, right? So how do we get all the solutions within our environments working together to help us understand as much as we can about a person or entity, so that we can make the right decisions? I think that mindset becomes part of the AI journey as well, for how we move toward that autonomous cybersecurity model.
Jason Clark:
Yeah, you kind of have to bring it all together. You have to have a brain for your cybersecurity program before you can start applying that, and right now there are so many disparate types of systems. I always use the analogy that we have a disconnected nervous system. The sense of smell, the sense of feel, and the sense of sight are not connected, other than maybe in a SIEM, which is memory versus the frontal lobe. So I think we have a lot of work to do to connect all these things together, which I think is the intent with SSE and with zero trust. It's a journey to get these things connected so we can react and defend faster. Another topic you all were talking about is APIs. I see a lot of conversation around SaaS being the fastest growing risk for organizations, and a lot of conversation around mobile; everybody is familiar with the mobile risks. But APIs don't get talked about enough, and they are a fast growing risk, with everybody wanting to connect everything together. So when we think about the future attack surface, this is potentially one of the fastest growing attack surfaces. I'm curious, what are a couple of thoughts on what the risks are there? And then we'll talk about what people should be doing.
Steve Riley:
So let's talk about some data that maybe supports your idea that API attacks are growing. According to Akamai, API requests comprise 83% of all their traffic now. They think it's going to grow 30% year over year, and they're expecting 42 trillion API requests by 2024. Cloudflare says that API traffic grew 300% faster than web traffic in 2020. And a research note at Gartner showed that client inquiries related to APIs, including security management, are increasing 3% year over year. So yeah, this stuff is on people's minds, that's true. Now, what do you do about it? Mostly JSON and XML payload processing and monitoring API usage thresholds are the things you can take a look at, but only if you can find the APIs. A lot of applications are plagued with shadow and zombie APIs, and those can create vulnerabilities that may result in huge security incidents. Automation can help with this. Automated API discovery mechanisms are beginning to appear on the horizon. They rely on traffic pattern learning, and they can also integrate with API definitions like Swagger and the OpenAPI Specification. Together, these can often help provide a positive security model for APIs. But the cataloging, and validation, and testing, and access control are only part of the solution here. It's also necessary to manage the consumption of internal and third-party APIs. Now, one thing I'll note is that because APIs are becoming more and more prevalent, we're seeing web application firewall providers add API protection and other API gateway features. In fact, 2020 was the final year Gartner published a Magic Quadrant for WAFs. The new MQ covering this space has expanded into DDoS protection, bot management, and API protection.
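The discovery problem Steve raises, shadow and zombie APIs, reduces at its core to comparing observed traffic against the documented spec. A minimal sketch with hypothetical paths; a real tool would parse the OpenAPI document's `paths` section and live access logs instead of hard-coded sets:

```python
# Surface shadow APIs (seen in traffic, missing from the spec) and zombie
# candidates (in the spec, no longer seen in traffic).
documented = {"/v1/users", "/v1/orders"}                                  # from an OpenAPI spec
observed = {"/v1/users", "/v1/orders", "/v1/admin/export", "/v0/users"}   # from traffic logs

shadow = observed - documented   # undocumented endpoints: unreviewed attack surface
zombie = documented - observed   # documented but unused: stale, possibly forgotten

print("Shadow APIs:", sorted(shadow))
print("Zombie candidates:", sorted(zombie))
```

Anything in the shadow set never went through design review or the positive security model; anything in the zombie set is a candidate for decommissioning before an attacker finds it first.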
Jason Clark:
So now it's called, and I don't know if I have this right, W-A-A-F?
Steve Riley:
W-A-A-P.
Jason Clark:
W-A-A-P, okay. So how do you say that?
Steve Riley:
Web Application and API Protection, WAAP.
David Fairman:
WAAP, WAAP. I think you're spot on, Steve, and I think it was Gartner that predicted that by 2022, API abuses were going to be the leading attack vector. I think we're starting to see that too. You've rattled off some really, really good stats there. And let's think about it: we keep talking about digital transformation, and as part of enabling digital transformation over a number of years now, we've been talking about the API economy. So it's only natural that we're seeing this rise in APIs, and that is really what's driving that traffic. It's funny how some of these fundamentals keep coming back. I mentioned it when we were talking about the deepfake piece, going back to the fundamentals of controls, and checks and balances, and verification in fraud. Well, you mentioned it yourself around discovery and understanding your inventory of APIs in your environment. That's still a basic, fundamental control, isn't it? You can't protect what you don't know about; you can't secure what you don't know about. So I think some of that asset inventory matters, particularly on the API side. And you're right, there are some great capabilities emerging on the market nowadays in terms of API discovery and security. But I think there's another piece we need to talk about when we talk about API security: what do we actually mean? APIs are really here to facilitate business logic. So I think there are two elements. There's the piece around, is this API secure? And have we seen a change in this API, such as one that was never calling or delivering personal information or some form of data type before, and that has now changed? Let's make sure we've got the right controls in around that.
David Fairman:
But what about the behavior of that API and how it supports the overall business logic? Now we start to see a change in the way that API is behaving, and what does that mean? How has it drifted from the business purpose of that API? Those things then become an indicator of a potential attack on the API itself. And we're seeing some of these companies you alluded to really looking at the business logic side to model API behavior. I think that's going to be really, really critical.
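David's behavioral-drift idea, an API suddenly returning data types it never returned before, can be sketched as a simple baseline comparison. The endpoint, payload, and PII patterns below are deliberately simplistic placeholders:

```python
import re

# Deliberately simplistic PII patterns, for illustration only.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def detect_drift(endpoint, response_body, baseline):
    """Alert when an endpoint starts returning data types it never returned before."""
    seen_now = {name for name, pat in PII_PATTERNS.items() if pat.search(response_body)}
    new_types = seen_now - baseline.get(endpoint, set())
    if new_types:
        print(f"DRIFT on {endpoint}: now returning {sorted(new_types)}")
    baseline.setdefault(endpoint, set()).update(seen_now)

baseline = {"/v1/orders": set()}   # behavior learned from earlier traffic
detect_drift("/v1/orders", '{"id": 7, "contact": "jane@example.com"}', baseline)
```

Real products model far richer behavior (call volumes, callers, sequences), but the principle is the same: learn what the API normally does, then treat departures from its business purpose as a signal.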
Shamla Naidoo:
Hey, and Jason, one thing I would add to this conversation: a few years ago, there was a rush to free, and what we saw is that the cost of free was you giving up privacy and seclusion. Right now, we are seeing a rush to open. And this whole idea of a rush to open APIs is great for the ecosystem, inviting more and more people to build on your capability, giving you opportunities to add revenue. But the question really is, what's the cost of APIs? What's the cost of that open API? And I feel like, as David said, we're coming right back to it: the cost of this open API economy is going to be authentication and access control. Because what we're really doing by creating open APIs is allowing more people to do business with us, but embedded in that community are going to be the criminals. So we're back to this whole idea that you still have to go back and double down on access control and authentication, even in this API economy, in this open economy.
Mike Anderson:
Yeah, there's a thing I would add on to this. Look at IT in general. First off, let's talk about internal APIs. A lot of companies look at it as, "I've got external APIs that I publish and make accessible to my trading partners, and then I've got internal APIs that are the building blocks for applications." The buzzword two or three years ago was microservices architecture, where I basically take an entire application, expose it via an API, and that is a self-contained application. Now, there's the purist view and there's reality; in a lot of companies, microservices still have layers that are not fully decoupled from other applications. Now you have composable architectures, where I'm taking business applications and making them consumable as APIs as well. So I think we have to think about how we discover both the external APIs and those internal APIs. And it starts with education around the fundamentals of how I build those APIs. Now, shifting to the external side, there's the consumption element. A lot of times what I hear is people concerned about, "Can I trust that API I'm consuming from that third party? What do I know about them? Because there are so many of them I can leverage." Some of those you may think are provided through a marketplace by a hyperscaler, one of the cloud providers. But is it really provided by them, or is it simply a marketplace application? And can I trust that marketplace application and the API it exposes? So I think we have to start including APIs provided by external parties in our third-party risk conversation. And not only can I trust the party, but can I trust the data that's coming in? Because often what we're seeing is people consuming APIs and putting that data into data lakes, then building machine learning algorithms on top of those data lakes to make decisions. So if I don't know enough about the third party, is the data I'm ingesting trustworthy? And is it now going to impact the machine learning that I'm using to make decisions across my organization? So I think we really have to think about not just our own environments but the APIs we're consuming from other parties, and start to think about that from a risk management standpoint.
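Mike's point about trusting third-party data before it reaches a data lake is essentially schema and plausibility validation at ingestion time. A minimal sketch with a hypothetical schema; a real pipeline might use a library like jsonschema instead:

```python
# Expected fields and types for a hypothetical third-party payload.
EXPECTED = {"id": int, "price": float, "currency": str}

def validate(record):
    """Reject records with missing fields, wrong types, or implausible values."""
    for field, ftype in EXPECTED.items():
        if field not in record or not isinstance(record[field], ftype):
            return False, f"bad field: {field}"
    if not (0 < record["price"] < 1_000_000):   # crude plausibility bound
        return False, "implausible price"
    return True, "ok"

for rec in [{"id": 1, "price": 9.99, "currency": "USD"},
            {"id": 2, "price": -5.0, "currency": "USD"}]:
    ok, reason = validate(rec)
    print(rec["id"], "ingest" if ok else f"quarantine ({reason})")
```

Quarantining bad records at the edge of the lake is what keeps untrusted third-party data from silently shaping the models trained on top of it.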
Jason Clark:
Well, you just hit another point. A huge part of the future of security is models, leveraging AI to help us do our jobs. Dave talked about the behavior of APIs, and we talked about the autonomous side of cybersecurity. You think about everything we just talked about: ML is the enabling factor for all of it. But then Mike, you kind of just alluded to, can we trust everything that comes in? And then you're building models around things you don't trust. So what do you see in the future, for maybe one or two more minutes here, about the risks of ML, and how do we manage them?
David Fairman:
So look, I'll take that, because this has been quite a pet topic for me for a couple of years now, and I've been involved in some very innovative, early startups that are looking to combat this problem. If we talk about some of the risks, we think about data poisoning, we think about bias, we think about model robustness. So let's break them down. At the very basics, and I'm not going to explain this as well as any educated, qualified data scientist, ultimately our models are based on data. There's a set of training data that we know the outcome of, that we use to train the model to predict the future. That training data can be manipulated in multiple ways. It could be poisoned; the integrity of the training data could be suspect; false data could be inserted into the training data sets. And to something Shamla mentioned about the race to open: a lot of training data sets are open source or freely available to train on, and just like anything else that's open, what about the security or the integrity of those training sets? They themselves can be compromised, which in turn will skew the results of the learning model. And then you've got the robustness and bias elements. Models are trained by humans, and humans naturally have bias themselves, so how do we start to identify and predict that? And then think about the pervasiveness of models in automation. The point of having machines do activities is so that we can scale and do things really, really fast. If a model is not behaving the way we want it to, it can have a very pervasive impact on an organization in terms of the outcomes, and the activities, and the decisions being driven through it. So how do we wrap governance and controls around that, to make sure the model is behaving the way we expect it to behave?
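One basic control against the training-set tampering David describes is to pin a checksum when a data set is vetted and refuse to train if it later changes. A minimal sketch; the file name and the pinning workflow are hypothetical:

```python
import hashlib

def sha256_of(path):
    """Stream a file through SHA-256 so large data sets don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Digest recorded at vetting time; populated below for the demo.
PINNED = {}

def verify_before_training(path):
    if sha256_of(path) != PINNED.get(path):
        raise RuntimeError(f"{path} does not match its pinned digest; possible poisoning")
    print(f"{path} verified; safe to train")

if __name__ == "__main__":
    with open("train.csv", "w") as f:
        f.write("x,y\n1,0\n2,1\n")           # stand-in for a vetted training set
    PINNED["train.csv"] = sha256_of("train.csv")
    verify_before_training("train.csv")
```

Checksums only catch tampering after vetting; poisoning that's already inside the data at collection time needs the statistical and governance controls David goes on to describe.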
Mike Anderson:
Yeah, I'd add on, Dave, going back to a couple of things you said about deepfakes. If you really think about where we're at today, we say AI/ML, but we're really just at machine learning today. It's still a human programming an algorithm based on data. We haven't really gotten to the real artificial intelligence point, where computers are making their own decisions and creating their own algorithms based on what they observe in their environments. And I think when we get to that point, how do we safeguard against fake data? Think about what you mentioned earlier about political elections, and how fake news basically guided a different decision. If we turn on true artificial intelligence, machines learning from what's in the environment, how do we keep that fake news, that fake information, from influencing the decisions a computer is going to make once it truly becomes autonomous artificial intelligence? I think that's where there will be a lot of conversations around the ethics of artificial intelligence, and how we make sure we eliminate that fake data. So I think the prediction you made earlier about deepfakes is extremely important, because it's going to influence the direction we go from an AI/ML standpoint in the future.
David Fairman:
Yeah, it's funny. Those two issues are very connected, very connected and one will influence the other.
Jason Clark:
So just to kind of wrap us up, Shamla, you had a closing thought for the listeners, and then we'll be finishing up here.
Shamla Naidoo:
Yeah, my last prediction, Jason, is that we're going to see CEOs in particular creating programs to support the mental health and wellbeing of security leaders. Let me add a little bit of color to that comment. Very early in 2019, pre-pandemic, Forbes actually ran a survey, and one in six CISOs said that they turned to medication and alcohol to deal with the stress of the job. Just think about the impact of that. One in six voluntarily shared that they turn to medication and alcohol to deal with the stress of the job; that suggests the number is much higher when you count the people who didn't disclose. So the point is, this job is really hard and it continues to get harder. But at this point, there's very little in the way of mental health support for the security leaders and for the security teams. So I really think that CEOs are going to start to double down on not just innovating for the business but also helping the CISOs: creating innovation for security, giving them the tools, the technology, and the solutions to help them do their jobs better, but also supporting that with mental health and wellness support programs, because I don't know that they can do this on their own. Just think about it: most of us didn't sign up to this industry for the purpose of protecting national security. We signed up to these industries and these jobs to protect our companies. What we now see is an extension of the CISO's remit into protecting the national security of our countries, without the appropriate support, without the appropriate training. We're having to be on the front lines of national security and the economies of our countries and the globe, and that creates enormous pressure for the CISO. And I think with that needs to come not just the funding and support we're seeing today, but mental health and wellness support, because this job is probably the most stressful in the C-suite today.
Jason Clark:
Absolutely, no doubt. It's an extremely, extremely hard job and very stressful. And I think the shining light of it is that it's such an amazing community. We share, we're connected. So I think part of it is just talking, talking about the stress with people who understand. But I do think we should spend a lot more time thinking about how we help everybody from a mental health standpoint. It's something, Shamla, that we could even potentially talk to the Security Advisor Alliance about. I know it's been hyper-focused on getting talent into the industry and helping kids get into the industry.
Shamla Naidoo:
I agree. I think this topic deserves a lot more attention, a lot more research, a lot more conversation, because we have to recognize that the stress in the job is built into the job. It's not because we are not up to the job. And so those conversations really need to come to the surface if we plan to retain the talent that we have in the industry and attract more talent to the industry.
Mike Anderson:
Yeah, I just want to add one comment on this, because mental health is an area that I feel strongly about. I think in general, we need to make sure that people feel okay talking about mental health in the workplace, because it's something that impacts all of us; especially during the pandemic, we've all felt it at times, at different levels. We need to make mental health in general something that's okay to talk about, and we need to make it more human. We've spent so much time trying to take the human element out of the workplace, but I think we have to bring some of that human element back and make sure that we're leading with empathy with our people, making it okay for people to talk about mental health and how they're doing. Just like we do with physical health, mental health needs to have the same type of focus.
Jason Clark:
So that's all we have time for. This has been amazing. Thank each of you for the brilliant insights, the open conversation, and sharing your thoughts with all the listeners. And again, thanks to all the listeners for continuing to download, and thanks to these awesome people for coming and joining and talking with us. Again, a great community, a great industry. So everybody have a good rest of your day. Thank you.
David Fairman:
Thanks, Jason.
Steve Riley:
Thanks everyone.
Sponsor:
The Security Visionaries podcast is powered by the team at Netskope. Looking for the right cloud security platform to enable your digital transformation journey? The Netskope Security Cloud helps you safely and quickly connect users directly to the internet, from any device to any application. Learn more at N-E-T-S-K-O-P-E.com.
Producer:
Thank you for listening to Security Visionaries, please take a moment to rate and review the show and share it with someone you know who might enjoy it. Stay tuned for new episodes releasing every other week and we'll see you in the next episode.