
                                      Combating Misinformation and Deep Fakes in Elections and Business: Q&A with David Fairman & Shamla Naidoo

                                      Jul 26 2024

                                      Technological advances in how we create and consume media have repeatedly transformed how election campaigns are fought: social media, TV, and radio were all revolutions in their time. There have always been concerns about the impact these new technologies would have on democracy: following the first televised presidential debate in 1960, the Milwaukee Journal worried that “American Presidential campaigning will never be the same again.” Perhaps they were right…

                                      It’s clear 2024 will be remembered as a big year for democracy, with elections in 64 countries as well as in the European Union, and the latest disruptive technology is AI. This year we are seeing increased use of generative AI to create deep fakes: videos that look and sound real but have in fact been artificially created, often to spread misinformation or disinformation. Deep fakes are powerful tools, and with AI technology rapidly evolving and access to it expanding, there are clear potential dangers not just for democratic decision making but for consumers and businesses too.

                                      With that in mind, I (a Brit) sat down with our in-house experts David Fairman (an Australian and our APAC CIO) and Shamla Naidoo (a South African American, and CXO Advisor), to hear their thoughts on the potential security issues driven by tech during these global elections, what these technological developments could mean for people and enterprises, and how we can protect ourselves as individuals.

                                      Emily: David, kick us off, what are deep fakes and how are they being deployed?

                                      David: Deep fakes are images, video, and audio created by generative artificial intelligence that manipulate a given person’s likeness to make them appear to do or say something that in reality they did not. Not only do they spread misinformation and lies, they’re also increasingly easy and cheap to make. All you need is the right software and source materials, such as publicly available images and videos of a person, to inform the AI tool.

                                      In the case of politicians, this material is very easy to source, but increasingly business leaders, celebrities, and frankly anyone who uses social media could be a deep fake victim. Today, as we mostly consume media via short, quickly absorbed videos, deep fakes can be very convincing, especially to the untrained eye. As the technology continues to evolve, it will become even more difficult to distinguish what content is real from what isn’t.

                                      Emily: Shamla, how are bad actors using them during elections?

                                      Shamla: We are seeing deep fakes across many democracies in the run up to elections, in particular during the often emotionally charged campaign periods, and the ultimate goal is to influence our voting decisions. All of our decisions, whether we are conscious of it or not, are influenced by what we see and hear on a daily basis. And in an election campaign a deep fake piece of content relating to a key topic can affect our decisions about who we vote for and ultimately who ends up in power.

                                      Because deep fakes are cheap and fairly simple to make, they present a huge opportunity for bad actors who have an interest in a population voting (or not voting) for a particular candidate.

                                      Emily: How are deep fakes being handled by the victims in the political world?

                                      Shamla: Although they are increasingly common, deep fakes are being handled case by case in most democracies, viewed fundamentally as a reputation management issue for candidates, because the misinformation is often personal and scandalous in nature. This perhaps downplays the power of these campaigns because each one is being created with a wider goal in mind. 

                                      The more prolific deep fakes become (and in my eyes it is inevitable they will become more common), the more we will have to educate and put systems in place to ensure people and platforms double check information – and sources.

                                      Emily: What about in business, David, how are deep fakes posing a threat to business, and how are organisations handling the early incidents that are being reported?

                                      David: Well, this is a very new threat for businesses, but a challenge that will grow in significance. I believe businesses will have to guard against the use of deep fakes to impersonate senior executives. Social engineering attacks, in which criminals impersonate someone, often an authority or trusted figure, in order to trick people into transferring money, granting access, or handing over information, are already a danger. As AI technology develops, it will become even harder to differentiate a real call from a senior executive from a fake one, and so the potential for bad actors to dupe unsuspecting victims will be much greater.

                                      Emily: This all sounds pretty doom and gloom. But hopefully it isn’t. Is it? Please tell me it isn’t…

                                      Shamla: The fact that it’s so difficult to know what has been AI generated and what hasn’t makes it difficult for individual consumers to know what to do beyond proceeding with caution! 

                                      But thankfully, it’s not all as bad as it seems. We’re starting to see social media platforms labelling content that has been artificially created, so we can know whether the content is real before we share or ‘like’ it.

                                      That said, we can always aim to do more. I think we need to start seeing governments take serious next steps in implementing AI legislation, similar to the approach the European Union has taken with its Artificial Intelligence Act, and introduce legally binding requirements that will mitigate AI risks. Only then will we be able to control the use of AI in this context and the risks associated with it.

                                      Emily: David, and what about in businesses? 

                                      David: As I mentioned, one particularly dangerous use of deep fakes is in social engineering attacks. When a company can’t be entirely sure whether it is under attack, it should establish a so-called call-back procedure, which enables employees to verify through a separate, trusted channel whether a request is legitimate or whether there’s a criminal behind it.

                                      The advantage of this procedure is that it doesn’t rely on being able to spot that the audio or video is fake, just a sense that the message is unusual or asking you to do something out of the ordinary. 

                                      Emily: Ah, this is all incredibly useful! Thank you both for your time and for the valuable insight. 


                                      While deep fakes are just the latest technological innovation to challenge election processes and fairness (and potentially impact businesses), they feel different both in the speed at which they are evolving and improving, and in the extent to which they are putting potentially significant influence in the hands of anonymous bad actors. With elections around the world underway, let’s make sure we double check where our information comes from and whether we can trust it.

                                      For more information on elections, disinformation, and security, check out our podcast with Shamla here, or wherever you listen to your podcasts.

                                      Emily Wearmouth
                                      Emily Wearmouth is a technology communicator who helps engineers, specialists and tech organizations to communicate more effectively.
