Security Visionaries Podcast

It’s predictions season once again, and to mark the occasion, host Emily Wearmouth sits down for a conversation with Sherron Burgess, Senior VP and CISO for BCD Travel, and Shamla Naidoo, Head of Cloud Strategy and Innovation at Netskope, to talk about the hot topics they’re seeing for the year to come. Sit back for a rousing conversation about the changing relationship between CISOs and boards, the evolving world of cyber skills, the impending rise of AI regulations, and continuous adaptive zero trust.

The thing that I’m always worried about from an AI standpoint is the implications societally and what that may mean. I’m concerned that society will lose its responsibility to check machines and to really understand what’s real and what isn’t. And so I think that’s the ethos that we have to think about as we go into this new world and the promise of what AI is.

—Sherron Burgess, Senior VP and CISO for BCD Travel

Timestamps

(0:01) Introduction
(1:30) Shamla Naidoo's prediction: Changing relationship between CISO and the board
(9:54) Sherron Burgess's prediction: The evolving conversation about cyber skills
(14:03) Proposed AI regulations will face more scrutiny in 2024
(23:28) 2024 will see the realization of continuous adaptive zero trust
(28:08) Closing

 


On this episode

Sherron Burgess
Senior VP and CISO for BCD Travel

Sherron Burgess currently serves as the Senior Vice President and Chief Information Security Officer of BCD Travel as well as the VP of Strategic Development at Cyversity.

Sherron was instrumental in developing and implementing BCD Travel's Global Security program, which supported $27.5 billion in sales across 109 countries. Sherron also championed and supported the implementation of various industry certification efforts, including ISO 27001:2013, TISAX – VDA, SOC 2 Type 2 Compliance, PCI DSS Merchant Compliance, NIST 800-171, and ISO 9001.

In addition to their work in Cybersecurity, Sherron also co-championed and led the implementation and execution of BCD Travel’s Global Diversity & Inclusion program. Sherron also analyzed, strategized, and enabled the implementation of government security requirements to NIST 800-53 to meet government defense contractor requirements.

Sherron has a proven track record of successfully leading complex global initiatives. Sherron has a deep understanding of the ever-changing landscape of Cybersecurity and is constantly looking for ways to innovate and improve upon existing programs. Sherron is a respected leader within the industry and is known for their dedication to their team and their commitment to excellence.

Sherron Burgess has a Master of Science from the Georgia Institute of Technology in International Affairs, a Bachelor of Arts from Siena College in Spanish, and has completed LinkedIn courses in “Confronting Bias: Thriving Across Our Differences,” “Diversity and Inclusion in a Global Enterprise,” and “Body Language for Leaders.”


Shamla Naidoo
Head of Cloud Strategy and Innovation at Netskope

Shamla Naidoo is a technology industry veteran with experience helping businesses across diverse sectors and cultures use technology more effectively. She has successfully embraced and led digital strategy in executive leadership roles such as Global CISO, CIO, VP, and Managing Partner, at companies like IBM, Anthem (Wellpoint), Marriott (Starwood), and Northern Trust.

Emily Wearmouth
Director of International Communications and Content at Netskope

Emily Wearmouth runs Netskope’s communications across EMEA, LATAM, and APAC. Working across public relations, social media, customer references and content creation, Emily keeps busy unearthing stories and telling them in a way that helps customers and prospects understand what Netskope can do for them.


Episode transcript


Emily Wearmouth [00:00:01] Hello and welcome to another edition of the Security Visionaries Podcast. Audio stimulation for anyone in the cyber, data, or related industries. I'm a fill-in today, because I've been asked to host this conversation for my co-presenter Max, who has been struck by a virus. And for clarity, I don't mean malware, and Max is not an AI. He is a real boy. And as well as get well soon, I have to say that I'm really pleased that I've nudged my way into this one, because today we're pulling out our crystal balls in the annual tradition of making predictions for the year ahead. And I have two guests in the hot seat, both of whom are prepared to go on record and make some predictions for us. First up, we have Shamla Naidoo. Shamla has served as CISO for the likes of Starwood Resorts and IBM, and she's also an adjunct professor of law at the University of Illinois. She serves on multiple public boards, and she's also head of cloud strategy and innovation at Netskope. Welcome, Shamla.

Shamla Naidoo [00:00:54] Thank you, Emily. I'm glad to be here.

Emily Wearmouth [00:00:56] My second guest is Sherron Burgess, also a CISO at a very large organization and also a board member. And Sherron may be familiar to listeners for the work that she does championing diversity in our sector. In fact, one of her board positions is with Cyversity, a not-for-profit organization that's dedicated to increasing the presence of underrepresented minorities within the cybersecurity field. Really important work. Welcome, Sherron.

Sherron Burgess [00:01:19] Thank you, Emily. Glad to be here.

Emily Wearmouth [00:01:21] So without further ado, we're going to dive right in. Polish up that crystal ball. Shamla, hit us up with a prediction to get us going. What's something we should expect to see in 2024?

Shamla Naidoo [00:01:30] You know, Emily, for many years, CISOs have been reporting into boards. But I think what we're going to see is a change in that relationship. So up to now, CISOs have come into the boardroom mostly, not all the time, but mostly as a subject matter expert. The relationship was largely owned by other executives like the CIO or the chief risk officer or even the general auditor. And so the relationship was owned by those leaders while the subject matter came from the CISO. In more recent months, what we're seeing is, you know, the SEC is just one catalyst, but generally speaking, I think boards are beginning to understand that the CISO's role is very different to the CFO's role. The CISO is an operational leader on the ground. And so I think what you're going to see is boards are going to lean more on the CISO. They're going to want that relationship, because the CISO is the person with the information, the updates, the knowledge, and also a good sense of the state of health and welfare of the organization as it relates to cybersecurity. So, you know, as I think about it, a related change that we're going to see in the industry is, you know, boards are covered by directors and officers insurance for the work they do in the scope of that role. And what we're seeing right now is CISOs are beginning to ask for coverage, insurance coverage and other protections like indemnity for doing their jobs, because we've seen CISOs come under investigation for the results in their organizations. What I think you're going to see is boards are going to understand that request from the CISO, because they have it. And where the CISO is not automatically an officer, they're going to want to have some kind of endorsement on their policy to include them for investigations when they're doing their jobs. So I think those two things are going to come together really nicely, where the board is going to want that relationship. They also understand the ask from the CISO for insurance coverage and indemnity for doing their jobs. That's what I'm seeing that's imminent with respect to those relationships.

Emily Wearmouth [00:03:44] Sherron, I'm interested to get your thoughts. It sounds like there's almost a point of tension, where CISOs perhaps want more of a relationship directly into the board, but maybe they're being held back at the moment because they're not afforded the same protections that the other board members have in these insurances. Is that something that you've come across, or in conversations you've had with your peers, does it factor in at all?

Sherron Burgess [00:04:05] Yeah, I really think it does. I mean, over the past year, year and a half, we've seen CISOs now on the chopping block associated with hacks and breaches. And so, you know, in a lot of cases, it's not necessarily the negligence of the CISO; it might be the decision making of the board as a whole in terms of the risks that the business is assuming. And so among the community of CISO peers, we're talking about how do we do our jobs, how do we advise and consult with our businesses, with our boards, etc., but still also make room for what the business wants to do and maintain our integrity as professionals. And I think in order to do that, instead of just trimming the guidance that we provide, to be able to come to work fully doing our jobs as professionals, there needs to be a little bit more protection in place. So, yeah, I am seeing that. And, you know, obviously with the situation with Joe Sullivan and what's going on in other cases, CISOs are on heightened awareness here.

Emily Wearmouth [00:05:13] Are there any big milestones we should look out for that show that we're making progress towards sort of what you're predicting here? Will there be big movements from particular insurance companies? Will there be changes in the regulations for how governments require boards to organize and protect their members? Or is it just something that we're going to see slowly filter in in less tangible ways?

Shamla Naidoo [00:05:33] You know, I think big milestones that we're going to see would be, one, CISOs beginning to ask questions like, am I an officer, and am I covered by the company's directors and officers insurance and other kinds of protections? Well, if you're not an officer, you're going to find out. And if you are an officer, you're also going to find out that you have coverage for doing your job. As soon as you step outside of the boundaries of your job, then you're likely to get into trouble, because coverage won't protect you for doing things that are less than legal or things that are outside of the scope of your duties. So what you're going to find is CISOs are going to ask, am I an officer? They're also going to ask questions like, what is the scope of my decision-making authority? And if it's not in my decision-making authority, then somebody else needs to make the decision, because I don't get coverage for decisions that I make that are outside of the scope of my duties. And so I think you're going to see a lot more clarity on CISOs, their roles, their decision rights. And then you're going to get a little more clarity on what is covered, what's in, what's out, which insurance carriers exclude the investigation of the CISO and just cover the cybersecurity breach and the forensics and the response. And so I think you're going to see shifts in how company executives approach this. You're going to see a shift in the industry, especially the insurance industry, creating clarity for what is in and what's out. And I think CISOs are just going to be the beneficiary of these shifts. And so the milestones are going to be: do CISOs feel like they're properly supported? And do they feel like their mental health is being taken care of, because they have this clarity, they have the protection, and they have the opportunity and the space to do their jobs and do a really good job? Versus today, what you're seeing is there's a lot of anxiety. There's a healthy dose of stress in the organization because so much of this is unknown.

Sherron Burgess [00:07:45] And maybe can I add to that? Because, you know, mental health, it's funny. When we talk about mental health for a CISO, it's like, that doesn't exist, right? I always get these fun questions from vendors like, what's keeping you up at night? It's like, everything keeps me up, because there's so much going on. You know, I think what's happening here also in the space around some of the milestones, yes, I think the officers insurance, and being named as an officer and being covered under that. But I think other nuances may include making sure that you have the right to your own legal representation. Because in a lot of cases where you have officers-related insurance, it's, well, as long as you're going to, you know, kind of toe the line with your company, that will cover you. And so I think those kinds of additional nuances will be important. I also think you'll start seeing more employment contracts where these things are being explicitly outlined. It's one thing to be named in the insurance, but the other part is, you know, as practitioners, requesting copies of that insurance and having it explicitly written into your employment contract is super important as well. I also like how you mentioned that decision-making authority. I think that's super important. You know, if this is outside of my decision-making authority, then, you know, I'm not covered here, so I'm not going to make that decision. I think also the ability to escalate. You know, in the case where there's some kind of legislation and SEC rules that are saying, well, you have the right to, or you're obligated to, report on events, even if that's not required by your board, or your company is saying that that's not something you should do, these abilities to escalate without damage or without penalty are going to be really interesting as well.

Emily Wearmouth [00:09:40] I didn't know when we decided to talk about predictions that it would be so immediately helpful to our listeners. I think this one's touched on some really interesting points. A great one, Shamla. And now over to you, Sherron. What's your prediction for us for 2024?

Sherron Burgess [00:09:54] Yeah, so my prediction would be that there's going to be a race for more qualified talent and this growth of the cyber workforce. Over this past year, the White House has rolled out a strategy for workforce development. There's this call for public-private partnerships to be able to grow cyber talent. And now, which is really interesting, there's this idea that every individual, and I would say in America, but really every individual, needs to have some basic cybersecurity or security-related knowledge. I think that's going to be super interesting in that it now informs universities and schools, even K through 12 institutions, that students need to be cyber aware so that they can contribute to general society.

Emily Wearmouth [00:10:44] When you say K through 12, just for our international listeners, what age group is that?

Sherron Burgess [00:10:48] From the time you start school, primary school, all the way through your younger years and then into university and the like. So we'll start seeing a lot more of that embedding of security-related skill sets as part of your formalized education in primary school and the like. But then also on the corporate side, what's going to be interesting is, you know, historically you talk about all these jobs that are available, you talk about, you know, that we need to have more talent to fill these jobs. Historically, we have been swapping resources. So you see individuals hopping from company to company, and companies have not been investing in developing their own talent pipelines, often looking to offshore or nearshore their resources to not have to develop that talent. I think what we'll start to see is that there is going to be greater investment in, you know, bringing people from other industries into cybersecurity, so those career transitioners, also looking to develop more from those university and college pipelines or younger ages, and also starting to bring in talent from nontraditional backgrounds. So you'll see not only diverse backgrounds, but you'll also see, I think, a growth in junior college or these two-year or technical-school-related trainings that will start to matriculate into the industry.

Shamla Naidoo [00:12:28] I think what Sherron did with her prediction is she just created a call to action, as well as provided the background and some of the industry changes that are going to happen to support this prediction. And so that's, I think, very astute. Thank you, Sherron, for doing that, because we do need to have this talent conversation. And frankly, for a long time, we have been saying that, you know, cybersecurity is a team sport. We use analogies like, you know, everything is not all about the cybersecurity leadership team, the whole organization needs to be involved, etc. But I think your prediction is that call to action for how to operationalize it: how are we actually going to deal with, whether you're a young kid starting school, you're in university, you're an early professional, a late professional, an executive, at all levels, there needs to be a basic level of understanding of this topic. I think about this as, yes, it's a prediction, and it's a very astute prediction, but it's a call to action to let's all mobilize to make this real, because we absolutely need to. And it's no longer sufficient to say, well, we're going to be short 4 million jobs by 2030. Those kinds of metrics are not helpful, because we've been saying that for the past five years, or even closer to a decade, and we haven't made huge strides. So we now have to change our tactics in order to get even close.

Emily Wearmouth [00:14:03] Right, now I've done my homework. I have a bit of a festive goodie bag here, and it's filled with predictions from some of Netskope's bench of subject matter experts. Before he got ill, Max, my co-presenter, was walking the corridors and haunting the Slack channels, cornering people into predicting the future. So let me have a little metaphorical rummage. Here's a good one. I've picked one from Neil Thacker, our CISO for the EMEA region, and he predicts, I think quite safely to start with, that the proposed AI regulations will come under scrutiny in 2024, and I do hope he's right there. But interestingly, he wonders whether any country will attempt to steal a march competitively by deliberately delaying their own regulations in order to get a head start in this AI arms race. And I think it's a cheeky one, because half of it, as I say, is less of a prediction and more of a musing, really. I'm interested to see how that strikes you. And you might answer in general terms, but if you can be more precise: where do you expect to see the swift moves to regulate, and which markets do you think are giving indications they might be a little bit slower? I'm going to ask that one to you first, Sherron.

Sherron Burgess [00:15:09] Yeah, I mean, I agree. I think it's going to be interesting, really, because I think people are still trying to understand, you know, the promises of AI. I think right now it's very much a buzzword. You know, people are saying, oh, you just get AI and it fixes all of your problems and everything's going to be fine. Right. And so I think, number one, there's just a lack of a definition of what AI is and what people propose that it's going to do. You know, I think it's a really interesting point around, you know, are they going to delay regulation to see if it really does play out? It's also interesting to wonder whether they'd impose regulation to slow everybody down so that they can keep on pace, so everybody is developing and iterating at the same time. What I'm seeing is, you know, there are a lot of working groups that are popping up to just give some ideas and some musings about what's going on. I think the White House, or the Office of Science and Technology Policy here in the U.S., has rolled out this AI Bill of Rights, and you're seeing a lot of existing practitioners, CISOs, you know, developing these kinds of acceptable use policies and, you know, just trying to put some guardrails around it. But I think at the first go, people are going to try to do as much as they can within the limits. And I actually happen to have this opportunity, sitting in the role of a global CISO, to think about some of the areas, or countries, that I would expect to pop up from a regulatory standpoint. You know, of course, against the background of GDPR, I'd be very interested to see what Germany is going to do. They're usually very first in terms of privacy rights and the like. And I think this idea of AI and, you know, some of the concepts around discriminatory practices, or the idea around privacy and the right to be forgotten and the like, it will be very interesting to see what some of those already regulated countries are going to do in this space. But my first guess would be that everyone's going to try to go as fast as they can and determine all the respective use cases before they slap on some of that regulation to, I don't want to say slow everybody down, but I think once it gets abused on a larger scale, people are going to try to iterate on it as fast as they can.

Emily Wearmouth [00:17:38] What about you, Shamla? What are your thoughts on that prediction?

Shamla Naidoo [00:17:40] So I agree with you, and I agree with everything she said. And I think that, you know, Neil's prediction is very robust. Here's why: these industrialized countries, who are also privacy advocates and, you know, protectionists in many ways, I think what you're going to see is them kind of overrotate, because they want to be first out of the gate with regulation: look at us and look at our discipline and look at our structure, etc. I think the flipside of that is where I come down, which is, you know, but now you've just stifled innovation. You've also created an opportunity to brake in your own society while others might be, you know, going ahead really fast. And so we have to be careful that we don't stop the great opportunities that AI could give us. You know, I like the concept of do no harm, and so if we can create a social conscience of do no harm, however commercially viable your solution might be, I think that's probably the starting point. And then, you know, starting to get too nuanced with, you can do this, but you cannot do that, will create this competition of, okay, well, while the industrialized world starts to kind of vacillate on this construct of regulation, others will start to leverage the technology to get better outcomes, both commercial and social outcomes. And we might get left behind. And so I worry that as we think about this, we need to put regulation in context. Regulation is to regulate for bad outcomes. And frankly, we don't know the bad outcomes yet. We've seen some uncomfortable and inconvenient outcomes, but we haven't seen any catastrophic outcomes. And so what I worry about is, when you hit the brakes before you actually know you have an accelerator pedal, you really don't know how hard to hit that brake. And we need to think harder about whether we're stifling innovation, not just in our own countries and communities, but are we stifling innovation for society?

Sherron Burgess [00:19:44] I do want to say one thing on AI that I think is really interesting. Recently there's been a new definition added to the dictionary, which is this idea of hallucination: the idea that AI is drawing conclusions that are false, essentially. It made up the answer. Right. There are some beautiful parts about AI, and it's so cool when you think about the innovation, the speed, and all these things that would otherwise take lots of people and a lot of compute power to get through. The thing that I'm always worried about from an AI standpoint is the societal implications and what those may mean. You know, I happen to have a teenager, and by the time she gets to the place of being a productive adult in society, machines will be making a lot of decisions. And so I'm concerned that society will lose its responsibility to check machines, to really understand what is real and what isn't. I think that's the ethos we have to think about as we go into this new world and the promise of what AI is. We have to think about, you know, how do we not get consumed with it? How do we leverage it for innovation and the like, but how do we also make sure that we maintain the balance of what is real and what isn't, what is true and what is not? That kind of exploration, personally and societally, I think will be an important factor here. So, you know, I don't know how countries or regulation will meter that, but I think it will be really important as we look at what AI and machines and all of these things can do for us from an innovation standpoint.

Shamla Naidoo [00:21:39] And you know, Emily, I do want to add something to this conversation, because it's not a prediction necessarily, but it's again a bit of a call to action. I have seen many, many AI regulations that focus on the outcomes. What I would like to see regulators do is focus on the input, because the one place where you still have some regulatory or other control is at the point of origin: what data have I used to train my foundation models? What data am I going to use to do self-learning for my own organization? And, you know, I don't want people, and regulators in particular, trying to tell me how I should use my own data. But the foundation models — I want to make sure those have been built, those have been taught, those have been created with data that has integrity, data that has authenticity, data that has a point of origin with somebody responsible for it, so that we know where it came from and we know that it's real. I find regulators rushing to regulate the outcomes: well, don't do this, don't do that. But few are actually talking about regulating at the point of origin, where, in an AI world, you have the most control. As soon as the data has gotten into a model, it is out of your control, and all of us who are using it are going to fall foul of the regulation, because we didn't train the original model and therefore the bias is already in there, the hallucinations are already in there. So I'd like to see regulators rethink where they're regulating, not what they're regulating.

Emily Wearmouth [00:23:21] You guys can't help yourselves. You take a prediction, you spin it into a call to action.

Sherron Burgess [00:23:26] It's an excellent point.

Emily Wearmouth [00:23:28] Well then, all right, I'm going back to my goodie bag, and I feel like I should have a paper-rustling sound effect as I do this. This next one is from James Christiansen, and he posits that we will see organizations interrogate promises of what "zero" means in zero trust, and that they'll be looking for ways to implement continuous adaptive zero trust in much more granular ways. So with this one, where should we start? What on earth does he mean by continuous adaptive trust? Can you help us there, Shamla? Help us understand the prediction.

Shamla Naidoo [00:23:56] Yeah. You know, let's just start by saying society cannot function with zero trust. We have to trust someone and something at some point in order for us to function. So zero trust is not a goal; it's the starting point for how we think about interacting with technology. We don't want to trust anything by default. We don't want to trust it forever, at any time, anywhere. So this idea of zero trust is the starting point. And, you know, typically we used to trust a user who wanted to connect to a network. It was a device connecting to the network, and we'd say, well, we trust your laptop, therefore we're going to let you connect. And once you were a trusted user with the right credentials on a trusted device, you were connected to the network and you could do anything you wanted for as long as that session remained — or for the day or the week or the month, whatever it was that you allowed. I think what James is talking about is this idea that if you start there and you just trust the device and the person to do whatever they want to do, well, you could have some bad outcomes later, because we don't live in a world where everything is so well defined. So this idea of adaptive trust is: I want to trust you in the moment, in the circumstance, and then later on I want to reset my trust and start that whole process again. Along with the workflow, I'm going to adapt the amount of trust I give you. So the idea is not to grant trust all at once and then continue it forever, but rather to have some punctuation along the way.
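The model Shamla describes — start from zero, accumulate trust from signals verified in the moment, and re-evaluate for each action rather than once per session — can be sketched in a few lines of Python. This is a minimal illustration only; all signal names, point values, and risk thresholds here are hypothetical, not any particular vendor's policy engine.

```python
from dataclasses import dataclass

@dataclass
class Request:
    """Signals checked at the moment of the request, not remembered from login."""
    user_authenticated: bool   # fresh credential check
    device_managed: bool       # device posture verified right now
    location_usual: bool       # contextual signal
    action_risk: str           # hypothetical risk tier: "low", "medium", "high"

def trust_score(req: Request) -> int:
    """Start at zero trust; add points only for signals verified on this request."""
    score = 0
    if req.user_authenticated:
        score += 40
    if req.device_managed:
        score += 30
    if req.location_usual:
        score += 20
    return score

def allow(req: Request) -> bool:
    """Higher-risk actions demand more accumulated trust: trust adapts per action."""
    required = {"low": 40, "medium": 60, "high": 80}
    return trust_score(req) >= required[req.action_risk]

# The same user and device clear a low-risk action but not a high-risk one,
# because trust is re-evaluated against the action rather than granted for the session.
req = Request(user_authenticated=True, device_managed=False,
              location_usual=True, action_risk="low")
print(allow(req))            # True: score 60 meets the low-risk bar
req.action_risk = "high"
print(allow(req))            # False: same signals, but not enough trust for high risk
```

The point of the sketch is the shape, not the numbers: the decision is recomputed for every request, so trust is "punctuated" along the workflow instead of being a one-time gate at connection time.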

Emily Wearmouth [00:25:40] Lovely. Okay, I understand that now. So you're temptingly waving at me the idea that we might stop obsessing about this zero trust label a little bit in 2024, or at least allow some nuance around it. Sherron, what are your thoughts on this prediction?

Sherron Burgess [00:25:54] I also think it will be about what's in your control, because there's so much that's not in your control. Right. So when you think about these ideas of trusting through federated systems, or even pulling some of your social aspects into your corporate life, you'll see more of that blend here. You know, how much can you use, or what can you use, to say yes, that is the person; yes, they should have that access; and yes, they can have access to these resources? So I think it'll be a different way, using different kinds of modalities to support that trust discussion. I think that's what we'll start to see, so it should be really interesting to watch what happens in that space. But yeah, I mean, we did drive ourselves crazy with this idea of zero trust. And again, another conversation with the board: "You want to do zero trust? Do we really want to do that?" So I think there will be more conversations and varied ways of how we address trust in the future.

Emily Wearmouth [00:26:59] Okay. So zero trust will stay, but it's going to be less binary and we're going to build up a greater understanding of the elements that lead to the trust decision. Is that a fair way of looking at it?

Shamla Naidoo [00:27:11] Yeah, I think so, because, you know, it doesn't go away. But what it does demonstrate, for a CISO in particular, is that you're not starting from a place where you trust everyone all the time. The starting point is that you don't trust anyone, anywhere, on any device, because the world is so integrated. In order to add to the trust, you're going to have multiple steps and multiple decisions and multiple controls and multiple solutions, and as you add all of those, you're going to increase trust. And we know that trust accelerates business, so our goal is to have as much trust as we can put into the system. But we can only get there if we have all these different levels and layers and places for controls that we can validate and check and balance, etc. The goal is to increase trust by starting with zero trust.

Emily Wearmouth [00:28:08] Right, I like that — a much more positive way of thinking about it. I can see our producer is waving at me that we're heading towards time, so I think I might head off this afternoon and place a few bets. It all seems like really useful intelligence for the year ahead. And when this episode airs, I also think I'm going to be watching the LinkedIn post, because I've got a hunch that the opinionated masses of social media will have thoughts on what they agree with, what they disagree with, and that perhaps they will berate us for forgetting the most important thing we should have discussed. So thank you very much, both of you, for making time today.

Sherron Burgess [00:28:38] Thank you.

Shamla Naidoo [00:28:39] Thank you. Emily, it's been great.

Emily Wearmouth [00:28:41] You have been listening to the Security Visionaries podcast, and I've been your fill-in host, Emily Wearmouth. If you enjoyed this episode, please share it and subscribe on your favorite podcast platform. You'll find lots of other interesting episodes already out there, and we publish a new one every two weeks, some hosted by me and some by Max. We'll catch you next time.
