
                                Combating Misinformation and Deep Fakes in Elections and Business: Q&A with David Fairman & Shamla Naidoo

                                Jul 26 2024

                                Technological advances in how we create and consume media have repeatedly transformed how election campaigns are fought: social media, TV, and radio were all revolutions in their time. There have always been concerns about the impact these new technologies would have on democracy: following the first televised presidential debate, in 1960, the Milwaukee Journal worried that “American Presidential campaigning will never be the same again.” Perhaps they were right…

                                It’s clear 2024 will be remembered as a big year for democracy, with elections in 64 countries as well as in the European Union, and the latest disruptive tech is AI. This year we are seeing increased use of generative AI to create deep fakes: videos that look and sound real but have in fact been artificially created, often to spread either misinformation or disinformation. Deep fakes are powerful tools, and with AI technology rapidly evolving and access to it expanding, there are clear potential dangers not just for democratic decision making, but for consumers and businesses too.

                                With that in mind, I (a Brit) sat down with our in-house experts David Fairman (an Australian and our APAC CIO) and Shamla Naidoo (a South African American, and CXO Advisor), to hear their thoughts on the potential security issues driven by tech during these global elections, what these technological developments could mean for people and enterprises, and how we can protect ourselves as individuals.

                                Emily: David, kick us off, what are deep fakes and how are they being deployed?

                                David: Deep fakes are images, video, and audio created by generative artificial intelligence that manipulate a given person’s likeness to make them appear to do or say something that in reality they did not. Not only do they spread misinformation and lies, they’re also increasingly easy and cheap to make. All you need is the right software and source materials, such as publicly available images and videos of a person, to inform the AI tool.

                                In the case of politicians, this material is very easy to source, but increasingly business leaders, celebrities, and frankly anyone who uses social media could be a deep fake victim. Today, as we mostly consume media via short, quickly absorbed videos, deep fakes can be very convincing, especially to the untrained eye. As technology continues to evolve, it will become even more difficult to distinguish them and tell what content is real and what isn’t.

                                Emily: Shamla, how are bad actors using them during elections?

                                Shamla: We are seeing deep fakes across many democracies in the run up to elections, in particular during the often emotionally charged campaign periods, and the ultimate goal is to influence our voting decisions. All of our decisions, whether we are conscious of it or not, are influenced by what we see and hear on a daily basis. And in an election campaign a deep fake piece of content relating to a key topic can affect our decisions about who we vote for and ultimately who ends up in power.

                                Because deep fakes are cheap and fairly simple to make, they present a huge opportunity for bad actors who have an interest in a population voting (or not voting) for a particular candidate.

                                Emily: How are deep fakes being handled by the victims in the political world?

                                Shamla: Although they are increasingly common, deep fakes are being handled case by case in most democracies, viewed fundamentally as a reputation management issue for candidates, because the misinformation is often personal and scandalous in nature. This perhaps downplays the power of these campaigns because each one is being created with a wider goal in mind. 

                                The more prolific deep fakes become (and in my eyes it is inevitable they will become more common), the more we will have to educate people and put systems in place to ensure people and platforms double check information, and its sources.

                                Emily: What about in business, David, how are deep fakes posing a threat to business, and how are organisations handling the early incidents that are being reported?

                                David: Well, this is a very new threat for businesses, but a challenge that will grow in significance. I believe businesses will have to guard against the use of deep fakes to impersonate senior executives. Social engineering attacks (in which criminals impersonate someone, often an authority or trusted figure, to trick people into transferring money, granting access, or handing over information) are already a danger. As AI technology develops, it will become even harder to differentiate a real call from a senior executive from a fake one, and so the potential for bad actors to dupe unsuspecting victims will be much greater.

                                Emily: This all sounds pretty doom and gloom. But hopefully it isn’t. Is it? Please tell me it isn’t…

                                Shamla: The fact that it’s so difficult to know what has been AI generated and what hasn’t makes it difficult for individual consumers to know what to do beyond proceeding with caution! 

                                But thankfully, it’s not all as bad as it seems. We’re starting to see social media platforms label content that has been artificially created, so we can know whether the content is real before we share or ‘like’ it.

                                That said, we can always aim to do more. I think we need to start seeing governments take serious next steps in implementing AI legislation, similar to the approach the European Union has taken with its Artificial Intelligence Act, and introduce legally binding requirements that will mitigate AI risks. Only then will we be able to control the use of AI in this context and the risks associated with it.

                                Emily: David, and what about in businesses? 

                                David: As I mentioned, one particularly dangerous use of deep fakes is in social engineering attacks. When companies face situations where they can’t be certain whether a request is genuine, they should establish a so-called call-back procedure, which enables employees to verify, through a separate trusted channel, whether a request that appears to come from a trusted figure is legitimate or whether a criminal is behind it.

                                The advantage of this procedure is that it doesn’t rely on being able to spot that the audio or video is fake, just a sense that the message is unusual or asking you to do something out of the ordinary. 

                                Emily: Ah, this is all incredibly useful! Thank you both for your time and for the valuable insight. 


                                While deep fakes are just the latest technological innovation to challenge election processes and fairness (and potentially impact businesses), they feel different both in the speed at which they are evolving and improving, and in the extent to which they put potentially significant influence in the hands of anonymous bad actors. With elections around the world underway, let’s make sure we double check where our information comes from and whether we can trust it.

                                For more information on elections, disinformation, and security, check out our podcast with Shamla here, or wherever you listen to your podcasts.

                                Emily Wearmouth
                                Emily Wearmouth is a technology communicator who helps engineers, specialists and tech organizations to communicate more effectively.