Security Visionaries Podcast


Amid all the endless hype, what is the one thing about AI you wish were discussed more? In this episode, host Emily Wearmouth sits down with Neil Thacker, EMEA CISO, Yihua Liao, Head of Netskope AI Labs, and Suzanne Oliver, Director of IP Strategy at Scintilla, to discuss the AI topics they wish people talked about more.

"I think this era is certainly crying out for a bit more transparency about what is and isn't AI, and what is ML. They're often used interchangeably, and they are actually very different. I think the hype is so big because there's a lack of understanding of what these systems can and can't do."

—Suzanne Oliver, Director of IP Strategy at Scintilla

Timestamps

(0:01) Introductions
(1:28) What one thing related to AI do you wish were discussed more?
(3:48) Who should own these definitions?
(5:35) Any examples where you've seen AI that people might not be looking for?
(8:30) What do you make of the constant relabeling of everything as ChatGPT?
(14:54) As a security leader, what questions do you ask technical AI leaders?
(20:24) How do you have conversations about transparency and ownership around AI?
(27:02) Who should own the definitions and standards of AI?
(29:20) How much responsibility do you think should be carried by the developers themselves?
(30:48) Closing thoughts

 


In this episode

Neil Thacker
CISO, EMEA at Netskope

Neil Thacker is a veteran information security professional and a data protection and privacy expert well-versed in the European Union General Data Protection Regulation (EU GDPR).

Suzanne Oliver
Director of IP Strategy at Scintilla

Suzanne is a dual-qualified UK and European patent attorney who recently joined the Scintilla team as Director of IP Strategy. Before her career in intellectual property, she spent several years in engineering. Suzanne worked in-house at a technology and robotics start-up, GroundWOW, leading its IP strategy and the development of its IP policies. Before that, she was responsible for Operations, Legal, and IP at SeeChange Technologies, a wholly owned subsidiary and later spin-out of ARM Ltd. Prior to SeeChange Technologies, Suzanne led ARM's global patent and trademark prosecution teams, supporting a diverse portfolio of IP strategy, policy, and development matters for ARM.

In addition to her impressive professional qualifications, she holds several NED and board positions, most recently as President of the UK IP Federation, in which capacity she advised the UK government on innovation and intellectual property.

Throughout her career, Suzanne has championed women in STEM, inclusion, and diversity. She was named a "World Leading IP Strategist" by IAM in 2017 and a Managing IP "Corporate IP Star" in 2018 and 2019, and in 2018 she received a national award for her work in corporate diversity and inclusion leadership.

Yihua Liao
Head of AI Labs at Netskope

Dr. Yihua Liao is the Head of AI Labs at Netskope. His team develops cutting-edge AI/ML technology to tackle many challenging problems in cloud security, including data loss prevention, malware and threat protection, and user/entity behavior analytics. Previously, he led data science teams at Uber and Facebook.

Emily Wearmouth
Director of International Communications and Content at Netskope

Emily Wearmouth leads Netskope's communications across EMEA, LATAM, and APAC. Working across public relations, social media, customer references, and content creation, Emily spends her time unearthing stories and telling them in a way that helps customers and prospects understand what Netskope can do for them.


Episode transcript

Emily Wearmouth [00:00:01] Hi, and welcome to this edition of Security Visionaries, a podcast for anyone working in the cybersecurity and data ecosystems. I'm your host, Emily Wearmouth, and this week I have three amazing guests who bring three different perspectives to a discussion that I wanted to have with them all about AI. So first off, let me introduce everybody. Firstly, we have Yihua Liao who's a data scientist who's worked for all the big names, really, Facebook, Microsoft, Uber, and he's now the head of the AI Labs at Netskope. So welcome, Yihua.

Yihua Liao [00:00:31] Thank you. Glad to be here.

Emily Wearmouth [00:00:32] Next, we've got Neil Thacker, who's a chief information security officer and a very experienced data protection officer. He's worked with major names like Swiss Re, Deutsche Bank, Camelot, the lottery company. And he's also served as an advisor for both ENISA and the Cloud Security Alliance. Welcome Neil.

Neil Thacker [00:00:49] Thank you, Emily. A great pleasure to be here.

Emily Wearmouth [00:00:51] And then finally, Suzanne Oliver is our intellectual property expert today. Suzanne works in private practice as an IP strategist and a lawyer at Scintilla. She used to run the IP department at ARM and is also no stranger to Westminster in the U.K., where she represents a number of IP bodies and AI and machine learning associations at a government level. I first met Suzanne at London Tech Week, where we had a great conversation while trying to eat politely from little cardboard lunch boxes. So I'm really pleased that she's joining us today so I can have a second chance at a first impression. Welcome, Suzanne.

Suzanne Oliver [00:01:23] Well, thank you. And yeah, it was a great lunch. Great. Thank you.

Emily Wearmouth [00:01:28] So AI is a really hyped topic, and I think you'd struggle to find anybody that has not talked about AI this year. And so you might wonder, what can we add to this huge pool of discussion? And really, that's the thing I wanted to do today. I wanted to turn that question around to our panelists. And so I've asked each of them to be prepared to answer one question from me. And that one question is what one thing, amid all of this chat and the hype around AI, would you really like to see being discussed more? Everyone comes to this from a slightly different angle, so let's see what their answers are. And Neil, I'm going to start with you. So it's a bit like blind date. First question to you, please Neil.

Neil Thacker [00:02:05] Um, yeah, I mean, it's a great question. I think, for instance, I'd love to see everybody stop using ChatGPT, AI, and ML as synonyms, so I think it helps for us to kind of move away from that so we can better understand and raise awareness of how pervasive AI is today. I think it's generally underreported. So it's similar to what we had in the past, an organization saying, oh, for instance, we're using the cloud, but actually they're using thousands of cloud apps and each one is performing a slightly different task. And I think we have that same challenge with AI. It's already quite pervasive across organizations and, of course, consumers as well. They are utilizing these tools and services. So again, that would be the number one. And of course, it matters for a number of key reasons. One is just the general awareness of the current use of AI, and also being aware that it's not a future technology. It's happening today. Secondly, we need people, businesses, and consumers to understand that AI isn't just a specific app. My pet hate is when I hear somebody talking about AI and they say it's AI such as ChatGPT. It's a common misconception that there are only a few apps out there today utilizing AI-type capability. It lives in many of the apps we use today. It's being used, it's pervasive. It's critical for organizations and consumers to understand for what reason, what data is being processed, and what outcome is intended through the use of that app and service. So those are the kind of key aspects that I see.

Emily Wearmouth [00:03:48] Brilliant, thanks Neil. I'm looking at Suzanne's reaction, and I've actually got a question for you on the back of this. Who do you think should be owning these definitions? You know, Neil doesn't want it to be defined as just ChatGPT, but is there a common definition or standard for AI? And if so, who does own it, or who should own it?

Suzanne Oliver [00:04:07] Again, really good question. No one does own it today. And there are other standards that are used to describe levels, for example, autonomy levels in terms of autonomous cars and vehicles. And I think this era is certainly crying out for a little bit more transparency about what is and what isn't AI and what is ML. They're often used interchangeably, and they are actually very different. So I think there's a lot of hype, and I think there's a lot of hype because there's a lack of understanding of what these systems can and can't do. But on the flip side, there's a lack of understanding of who owns what goes in and who owns what comes out, to pick up on one of Neil's points, which is my biggest concern. I think Netskope, in one of your reports, highlighted the amount of source code that's actually input into these tools. With proprietary source code there may be a lack of understanding that these tools don't claim to keep it secret, or don't claim to allow you to keep it as your own, and they claim to own any output as well. So that whole area of the transparency of who owns what, what's it called and what does it do, I think is ripe, maybe not for standardization, but perhaps for a new language to be created that helps us communicate these aspects more clearly to people who don't necessarily understand them straight away.

Emily Wearmouth [00:05:35] Neil, have you got an example? You talk about A.I. being pervasive already and it being a technology of today, not of the future. Have you got any examples of where you've seen AI in action where people wouldn't perhaps know to look for it?

Neil Thacker [00:05:47] Yeah, I mean, we've seen the introduction of lots of A.I. assistants, especially in meetings and video conferencing. So it's quite common now: I'll join a meeting and there'll be an AI assistant, sometimes taking somebody's place, which is always interesting. Obviously, straight away, I mean, I do check for attendees in meetings. Not everybody does. Not everybody realizes that there's an AI assistant. And perhaps it's okay, for instance, if the assistant is taking notes or documenting the minutes of the meeting, which is great because most people don't like doing that. So that's great, let's automate that capability. But of course, that AI system might be performing some additional sentiment analysis. Perhaps it's looking at mood scoring; I've seen an A.I. assistant that does mood scoring every 5 seconds, which kind of worries me slightly. I've even seen some services using IQ scoring, so kind of listening to how people are talking and interacting and trying to assess their IQ. Now, of course, if you're going to do something like that in a meeting, you should tell somebody beforehand, because they might want to go and check the vendor, the provider of that service, the AI assistant, perhaps their privacy policy first. You might want to explain that upfront. Of course, if you're having a bad day, then, I don't know, one of those things is going to catch somebody out at some point in time. So that's just one example, perhaps a crude example, but I think that's where we need to better understand what the service is capable of. It's not just about recording and creating meeting minutes. There's so much possibility there. And it's about making people aware of that.
Now, as one example, I actually went back to the provider of one of these services, and I asked them, okay, what does your privacy policy state? And their response: don't worry, it's fully covered, and regardless, this is the future, so just accept it. So straightaway, alarm bells are ringing. Of course, I then had to dig deeper into finding out exactly what their service is offering. So I think again, it's quite hard for people to understand this. And sometimes it's too late. They're put on the spot. They have to make a quick assessment, a judgment on these services. So yeah, it's about really understanding, again, a level of trust, perhaps, from the provider of these technologies and services.

Emily Wearmouth [00:08:12] I'd say it's quite bold at this point to send AI to a meeting in place of you. With all the conversations about people going to lose their jobs, that's sort of willingly walking that path, isn't it?

Neil Thacker [00:08:23] Yeah, I guess the challenge is when we all send our own AI assistants to the meeting. I mean, what do they discuss? I'd love to be a fly on the wall.

Emily Wearmouth [00:08:30] Brilliant. Yihua, I wanted to bring you in at this point. You're doing a lot of work building AI systems and writing machine learning models. What is your thought around the sort of persistent relabeling of everything as ChatGPT? Do you find that frustrating?

Yihua Liao [00:08:49] It is, you know, because I feel like AI-washing is definitely a problem. It's obscuring the clarity in our understanding of AI. So I would definitely like to see more discussion about the way in which security companies, and perhaps tech companies in general, are building AI and ML: what's the input for the AI model, what's the output, and how reliable is the output? Right? And I feel like there's a lack of understanding and a lack of transparency on those aspects. So as a result, I think there are a few misconceptions. On one side, some people may say, hey, AI can do everything, it's going to take our jobs, it's going to control all we do. And then there are also people saying, you're training with my data, and I don't want you to use my data to help my competitors. Since I lead the Netskope AI Labs, and over the years we've actually developed a lot of A.I. capabilities at Netskope, well before this whole generative A.I. frenzy, perhaps I could share some of my perspectives on how we build machine learning and AI models at Netskope. First of all, we have built a lot of A.I. models at Netskope, including models to identify malware, phishing websites, and sensitive data. I think Suzanne mentioned the source code classifier; that's something that my team actually built a couple of years ago. We also use machine learning to identify anomalous user behavior, which may indicate a data breach or insider threat and so on. At a high level, what goes into our models is data that we gather from different sources, including data in public domains or data that we acquired from third parties. And we would never use our customers' data to build our machine learning models without their permission.
As you can imagine, for some of the machine learning models, for example user behavior analysis, we do need to look at each individual user's normal behavior. But even for that kind of model, first of all, we have permission from our customers. Second of all, we limit what information we look at. For example, when a user is downloading a file, we don't necessarily need to look at the file content in order to decide whether that behavior is abnormal or not. Right? So that's basically what goes into the model. Now, in terms of the output of the model: nowadays almost every company is claiming that they're doing AI and claiming how accurate their AI models are. But I would say that if someone tells you that their AI is 100% accurate, they are lying. At the end of the day, AI is really, you can think of it as, a probability: how likely something would happen based on the training data they have. So you always need either some human in the loop to verify the output of the AI model, or perhaps some kind of feedback loop inside your product, so that you can take that feedback and retrain your model and make the model better over time. So AI is really an iterative process. It's a journey, and you can never expect the model to be 100% accurate, or even 99% accurate, the first time. You always have to iterate over time.
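The human-in-the-loop pattern Yihua describes can be sketched in a few lines: act automatically only when the model's probability is confident, route uncertain items to a reviewer, and treat reviewed items as labeled data for the next retraining cycle. This is a minimal illustrative sketch; the function names and thresholds are assumptions for the example, not any real Netskope implementation.

```python
def classify(score, threshold=0.9):
    """Toy triage policy: auto-decide only when the model is confident;
    otherwise route the item to a human reviewer."""
    if score >= threshold:
        return "block"         # confident enough to act automatically
    if score <= 1 - threshold:
        return "allow"
    return "human_review"      # uncertain: a person makes the call

# Simulated model scores (probability that an item is malicious)
scores = [0.97, 0.55, 0.02, 0.81, 0.99]
decisions = [classify(s) for s in scores]

# Items sent to review become labeled examples for the next retraining cycle
review_queue = [s for s, d in zip(scores, decisions) if d == "human_review"]
```

The key design point is the middle branch: the model never pretends to be 100% accurate, and the uncertain band is exactly where the feedback loop collects new training data.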

Emily Wearmouth [00:13:30] I can see lots of nods from both Suzanne and Neil. Do feel free to jump in if there's something you wanted to add.

Suzanne Oliver [00:13:37] Yeah, I think the reliability point is what I was really vehemently agreeing with, and I think there's a lack of understanding especially, you know, I know this is Neil's pet hate, when you go from AI straight to ChatGPT. It is quite an interesting example in itself: it presents incorrect data as correct because it has no understanding of right and wrong, of truth. It's just the probability that that is the answer you're looking for. It's how it operates. And I think this is where my earlier point about education and skills comes in: to understand that these things are tools. And like humans, they're fallible, but they're fallible in different ways. Right? And I think that's something we as a society need to understand a little bit better. But on the point about having this transparency in how we describe them, maybe we need to categorize them in different sort of functional ways and categorize the risk of the output, like the autonomy levels I mentioned earlier. But yeah, I'm nodding in vehement agreement.

Emily Wearmouth [00:14:54] And Neil, I wanted to ask you, and you have something else to say, so feel free to answer both: did some of the points that Yihua was picking up on resonate with you as someone whose job it is to protect data? What sort of questions are you asking someone like Yihua at other tech companies to really get to the bottom of what these systems are being built to do?

Neil Thacker [00:15:16] Yeah, I mean, it always comes back down to the data, right? So, questions around what happens to the data that's being input as part of a query. For instance, if you're using genAI, what happens to the model? Is the model being used to provide additional services, privately or publicly? And also what anonymization and organizational controls are implemented. But then also, what happens to the output? Is there any kind of integrity check performed on the output, and can the output then also be used to pre-train further models? Right? So this is another aspect: you can of course go into a loop and further harness, further refine the output query based on a series of feedback loops utilizing AI services. But again, it's important to understand how old that data is and how many iterations that data has gone through. And then it also comes down to things like, for instance, what country? I think this is something we're definitely going to be asking more questions about in the near future, as more regulations come in to protect individuals. We've been hearing about the EU AI Act, a law that will come into place protecting EU citizens. And of course, that usually means there's going to be a follow-up series of other regulatory requirements and regulations from other countries who want to continue doing business with the EU, those kinds of things. We saw that with GDPR, as one example. So those are things to be aware of. One thing I'd also add: I think we're all aware, perhaps on this call, and perhaps others are aware, of the hallucination phenomenon, where AI will and can occasionally just create imaginative and creative content that's not based on fact or truth. I saw this recently as an example.
Somebody queried, "What's the world record for crossing the channel on foot?" And it gave a name, a date, and a crossing time.

Emily Wearmouth [00:17:20] That was my record, Neil.

Neil Thacker [00:17:21] So yeah, I mean, 14 hours, 51 minutes.

Emily Wearmouth [00:17:24] It took me ages. Yeah, Yeah.

Neil Thacker [00:17:26] But some people might think, well, perhaps it was correct. Perhaps somebody actually crossed the channel using the Channel Tunnel, for instance. But then the follow-up content said that it should only be attempted by professional swimmers. So you can kind of tell that there is some hallucination. So that's just one example. But yeah, I think we need to be aware of the data, the integrity, how it's being protected, and also all the regulations that are likely to be coming in that are actually trying to protect citizens around the use of these services and what data is actually being processed.

Yihua Liao [00:17:59] Yeah. Neil, that is so true. I mean, hallucination is certainly a challenge for practitioners like me. We're trying a lot of new things to minimize the likelihood of hallucination. But I would also like to highlight the thing that you mentioned earlier: A.I. is really about the data, right? Your AI is only as good as your training data. If you don't have reliable, high-quality data, if your data is biased, then your model is not going to perform really well. I'm sure some of you have heard the story that some of the face recognition AI models are less accurate when it comes to darker skin tones or women, because there is some gender and age bias in the training data. That's an issue for security companies like ours, because in most of our machine learning models we don't look at things like age, gender, and other PII information. But I would argue that it's still possible that the data we use to train our AI models may not actually represent what we see in the real world. So, as data scientists or ML scientists, we always try to improve the quality of our training data so that it's more representative of what we see in the real world.
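Yihua's "only as good as your training data" point can be shown with a toy experiment. All the numbers below are illustrative inventions for the sketch, not any real model: a single decision threshold is tuned on data dominated by one group, and the underrepresented group ends up with worse accuracy.

```python
def best_threshold(samples):
    """Pick the decision threshold that minimizes total error on the
    data we have; with skewed data this tuning is dominated by the
    overrepresented group."""
    def errors(t):
        return sum((score >= t) != label for score, label in samples)
    return min(sorted(score for score, _ in samples), key=errors)

def accuracy(samples, t):
    return sum((score >= t) == label for score, label in samples) / len(samples)

# Two user groups whose "positive" examples sit at different score ranges.
group_a = [(0.3, False), (0.4, False), (0.7, True), (0.8, True)] * 20  # dominates training
group_b = [(0.1, False), (0.15, False), (0.35, True), (0.45, True)]    # barely represented

model_threshold = best_threshold(group_a + group_b)

# The majority group is classified perfectly; the minority group is not.
acc_a = accuracy(group_a, model_threshold)  # 1.0
acc_b = accuracy(group_b, model_threshold)  # 0.75
```

The fix is exactly what Yihua describes: make the training mix more representative of the real-world population, so the threshold (or, in practice, the model) is no longer tuned only to the majority group.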

Neil Thacker [00:19:44] Yeah, I mean, we're also seeing, for instance, the whole supply chain and the economics of AI, right? There are organizations that supply data that can be used for training, those kinds of things, and then we start talking about data integrity there as well. Where was that data obtained? It's a bit like the marketing discussion: where was that information obtained from? Was it obtained with consent? All those kinds of things. So that also draws in a wider discussion, right, the whole economics of the supply chain: where that data has come from, who has given permission or approval to process it. There are lots of, I guess, requirements and challenges that organizations need to work through as they start to look at AI and the use of AI in their organization.

Emily Wearmouth [00:20:28] Definitely. And I think this segues quite nicely into how you would answer the question, Suzanne. We had a pre-chat, so introduce your answer. How would you answer the question: what's the one thing you would like to see discussed more?

Suzanne Oliver [00:20:42] Yeah, I think I've touched on it already. It's definitely around the transparency of what goes in and what comes out, and who owns that. I mean, data per se can't be owned, and that's probably a whole different topic in itself, so I won't go into that. But building on Yihua's point about unconscious bias: there are a minimum of 35 cognitive biases that we have as humans, and I could probably have named three if you'd asked me before this call. So how can we expect that the data going into these tools is representative of us if we don't even understand ourselves to begin with? So that's one element of my answer. The other is really who owns the output. From my perspective, AI mostly touches on copyright and copyright ownership. So for example, if I upload some photos into one of these tools and it takes one of my photos and puts an amendment on it, then that amendment is owned, maybe, by the tool, so here, for example, by OpenAI and ChatGPT, but I own the original photo. The output is a new piece of work, but it potentially infringes something that has happened before, because you look backwards with infringement while the innovation moves forward. So again, these are very difficult concepts for laypeople to understand, and business people as well. But unless you're a bit of an IP geek like me, there's very little discussed about it. And certainly the language is not easy; copyright is not an easy subject to get your head around, full stop. So I think as technology leaders, we need to be the ones making the conversation more transparent.
And this comes back to the point I was making earlier about having a common language that we define to talk about the data that goes in, the data that comes out, and the tool itself, so that we can really understand, monitor, and standardize some of what's going on and make it easier to understand.

Emily Wearmouth [00:23:04] From your perspective, to what extent are we now trying to close the stable door after the horse has bolted? I mean, you talk about things like Facebook having owned my holiday snaps for over a decade. Lucky them. But for consumers, it feels like to some extent that horse has bolted in terms of data ownership to feed these systems. Is that the case for corporations, or is there still a chance to shut the door?

Suzanne Oliver [00:23:26] I'm not sure we're ever going to shut the door, but I think there's no point sitting there and watching the horse run away into the distance. I think there's maybe a bit of corralling to be done. And for me, it's the speed of change. It's the speed of change in terms of organizations not knowing where their golden nugget data is, who's handling it, who is uploading it into the cloud. And you can only manage by exception, but you don't want those exceptions to happen, right? So I think it's really about understanding your engineers' behavior, your marketing people's behavior, and having those conversations with them: okay, these tools are great, they're going to help save you some time. But do you realize that when you have this secretarial tool running in the background, it's on a server in a country where maybe you don't actually want your monthly executive board meeting minutes being stored, no matter how much time it saves you from writing those minutes up? So whilst it seems like an easy thing to do, an efficient tool to use, there are actually downsides. And it's just about having that little bit of brain power to say, okay, for every positive there has to be a catch. As Neil alluded to earlier, it's about asking those questions: this tool seems really efficient, but actually, why is it free? Why is it cheap? Why does it cost less than having a person sat there writing the minutes or sharing them? There's got to be that other side to the equation, and you need to be asking yourself that question, I think.

Neil Thacker [00:25:15] Yeah, I mean, one of the concerns I have generally is that today we're already seeing organizations, for instance, issue questionnaires to better understand the use of AI in a product or a service. The challenge you always have is that a questionnaire is usually a one-off, perhaps on the adoption of a new tool or technology that may use AI, or it might be an annual review of that tool. But I think there always needs to be a point where we're doing this more regularly, more continuously, based on, as Suzanne highlighted, this fast adoption, this fast maturity growth in these services. A tool that you might adopt today is going to look very different in a week's time, or two weeks, or a month, as new features and capabilities get added. So it almost needs to be a continuous assessment. I'm a big fan of scoring apps and services and using that to define policy. And we're even seeing this challenge around apps and services that have been compromised: the rules that the providers put in place to protect the service have been broken. Like most things, hackers are going to hack, right? So you need to be aware of those kinds of things as well. And there are also some far-fetched use cases where tools are coming in that are perhaps crossing the line in terms of ethics, tools that employees in the organization are utilizing that may cross over the organization's own ethics policy. So that needs to be part of that assessment, that trust score, as well. That's where, from an organizational perspective, organizations should be looking at providing better oversight and governance around the use of AI services.

Emily Wearmouth [00:27:02] I asked you at the beginning, Suzanne, who should be owning the definitions and the standards, and in some ways this is a repeat of that question: who should be owning the adjudication of these sorts of decisions? Neil alluded to the EU's AI Act coming up, and we've seen lots of posturing from governments around the world trying to be seen to take a leadership role in the development of AI. But is it realistically something that can be regulated, or should it be coming from industry? Where do you see leadership coming from?

Suzanne Oliver [00:27:34] I think it always works when it comes from industry, personally speaking. But at the end of the day, I think standards bodies do a really good job. So the six autonomy levels, the 0-to-5 autonomy levels, are maintained by SAE, I think, which is an automotive standards organization, unless I'm mistaken. And the Internet is at present regulated by independent bodies. So my vote would be for an independent body, but from an industry perspective, because otherwise I don't think it'll stick. And it's just been mentioned on this call, I think Yihua mentioned the concept of greenwashing. I don't think it should be self-certification. There's too much of that with carbon and carbon offsets and those sorts of things, companies saying they're ethically green or whatever with a little sort of ethics tick. I've seen far too many startups that have one ML algorithm running on their machinery and then have "AI company", "AI machinery" all over their pitch decks, when they're using one off-the-shelf computer vision algorithm. I think that, as well, is not helping anybody understand what it is, what it's there for, what value it's bringing. But that speaks to quite a lot of the points that both Neil and Yihua made on this call.

Emily Wearmouth [00:29:20] Yihua, what about you, as someone who's developing this stuff? How much responsibility do you think should be shouldered by the developers themselves, and how much of that is unfair, in that you guys should be left to create things that, as a wider society, we perhaps regulate?

Yihua Liao [00:29:32] I think we as AI practitioners should also carry a lot of responsibility when it comes to responsible AI. Here at Netskope we have an internal AI governance committee to help us set the AI strategy, set up the review processes, and so on. And every time we start working on a new initiative or a new AI model, we always have to go through a very robust internal security and privacy review process, and we have to fill out questionnaires and make it really clear what goes into the model, how the model is going to be used, whether there are any privacy concerns, and so on. So yes, I do think not just government and industry, but also every company and all AI practitioners should be aware of this and take it seriously, in order to make sure that all of us can build AI systems and products in a responsible way.

Emily Wearmouth [00:30:48] Thank you. That was a mean question for me to throw at you, so thank you for tackling it. I'm really enjoying this conversation, and I'm in no doubt at all that if we were cozily ensconced in a British pub it could go on for many more hours. But our producer is currently waving at me, and that's my cue to blow the whistle and try to wrap you guys up. So I'm going to try to summarize, and feel free to interrupt if I'm doing so wrongly. I think it's fair to say that there are a lot of intertwined threads between the different ways the three of you have answered my question, but it also feels like there's a lot of consensus. I think you all largely agree that we want people to stop using ChatGPT, AI, and ML as synonyms. So Neil, we all get behind your wish and we'll try to work on that one; we fell into the trap ourselves in this discussion, I don't know if you noticed, and we must do better. We also, I think, largely agreed that it would be beneficial to have more explicit, sort of under-the-bonnet (or under-the-hood, for the American listener) detail behind this shiny AI labeling that people are putting on everything, whether in pursuit of press attention or high valuations, so that organizations and users can get a better understanding of how much risk they should be applying in their assessments and what they can and can't trust. And then I think the final point, which we touched on throughout and which came from your main answer, Suzanne, was that we would definitely benefit from more conversations around data ownership within the full AI supply chain, both with regard to what's going in and what's coming out: building a greater understanding among society, businesses, and individuals of what that ownership conversation looks like, so people can make informed decisions.
But just thinking through that list, we don't want much, do we? I mean, it's a very modest list.

Neil Thacker [00:32:36] It sounds easy.

Suzanne Oliver [00:32:37] Yeah, all solved in one podcast.

Emily Wearmouth [00:32:41] Done. They should get us together more often. Thank you, all of you, for your time and for such an interesting and, I suppose, cross-functional conversation, with each of you coming in with your own perspective. And to our listeners, I just want to say we'll catch you next time on Security Visionaries. Thank you.

Yihua Liao [00:33:00] Thank you.

Neil Thacker [00:33:00] Thank you, everyone.

Suzanne Oliver [00:33:01] Thank you.
