Can you spot fake AI news?

AI has had an enormous impact on the news landscape in 2023, with much of the noise serving to erode trust in the nascent technology. Between fake news about AI and AI-generated fake news, media organisations worldwide - and their readers - have shown a huge appetite for this dubious content.

To assess the scale of the issue, Netskope has ranked the most widespread fake AI news stories of 2023 so far, scoring each on social views, engagement, articles, reach and authority to determine its impact. This data has been juxtaposed with research showing the false confidence that the British and American public have in their ability to spot fake AI stories.

Fake AI-generated news stories are becoming more common

Alongside growing concerns around ChatGPT, and the increasingly asked question “Is ChatGPT secure?”, another emerging worry is the effect AI is having on fake news. The chart below shows how ‘AI fake news’ is a frequently trending subject on X (Twitter).

At Netskope, we’ve researched how far the most viral fake AI stories reached across news publications and social media, and how long it took for these stories to be revealed as fake.

Chart: ‘AI fake news’ mentions on X (Twitter). Notable spikes include July 12 (Midjourney launch date), October 22 (fake news during elections in Brazil) and May 1 (Indian telephone scams using chatbots and AI).

The top 15 AI-generated fake stories

We scored the most widespread fake news stories of 2023 so far, based on social views, engagement, articles, reach and authority, in order to determine their impact.
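
Netskope does not publish the exact weighting behind these scores, but composite “impact” scores of this kind are usually built by normalising each metric to a common scale and combining them in a weighted sum. The Python sketch below is purely illustrative: the metric names mirror the table’s columns, while the sample values, the weights and the 0-100 scaling are invented for the example and are not the methodology used here.

```python
# Hypothetical impact-scoring sketch: NOT the published methodology.
# Each metric is min-max normalised across all stories, then combined
# with illustrative weights into a 0-100 score.

STORIES = [
    # headline, social_views, total_coverage, estimated_views, authority
    ("Pope Francis puffer coat image", 20_800_000, 312, 6_610_000, 86),
    ("Donald Trump arrest images", 10_200_000, 671, 8_380_000, 86),
    ("AI drone 'kills operator' story", 0, 1_689, 1_330_000, 75),  # N/A social views treated as 0
]

WEIGHTS = {"social_views": 0.4, "total_coverage": 0.3,
           "estimated_views": 0.2, "authority": 0.1}  # invented weights


def min_max(values):
    """Scale a list of numbers to the 0-1 range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]


def score(stories, weights):
    names = [s[0] for s in stories]
    columns = list(zip(*[s[1:] for s in stories]))   # one tuple per metric
    normalised = [min_max(list(col)) for col in columns]
    keys = list(weights)                             # same order as the metric columns
    totals = [
        100 * sum(weights[k] * normalised[i][row] for i, k in enumerate(keys))
        for row in range(len(stories))
    ]
    return dict(zip(names, (round(t, 1) for t in totals)))


print(score(STORIES, WEIGHTS))
```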

Taking the top spot is an image of Pope Francis wearing an oversized white puffer coat, one of the more light-hearted fake news stories we’ve encountered. This story of the Pope dressing in trendy streetwear was so believable and lovable that it racked up over 20 million social views and was covered by 312 media publications.

In second place are the fake AI images of Donald Trump being arrested in downtown Washington DC in March this year. Although Donald Trump was later arrested in real life, these convincing images predated the arrest and were created with the AI image generator Midjourney. They went viral on X (Twitter) with over 10 million views, and were covered by 671 media publications – more than double the number that covered the Pope story.

Artificial intelligence (AI) image generators like DALL-E and Midjourney are growing in popularity, attracting millions of new users. These easy-to-use programs allow anyone to create new images from text prompts, and the results are often so realistic that even news publishers struggle to differentiate the fictional images from reality. Such falsified images dominate our chart, driving 14 of the 15 stories.

In third place is the only story on the list not driven by an AI-generated image: the reported simulation of an AI drone killing its human operator. This story was covered by the highest number of publications, with 1,689 pieces of coverage. However, the average authority of the sites covering it was lower, and the estimated views of that coverage totalled 1.3 million.

| Headline | Social Views | Social Content | Social Engagements | Total Coverage | Estimated Views | Combined Publication Audience | Coverage Engagement | Publication Authority Average (out of 100) | Total Score |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Image of Pope Francis wearing oversized white puffer coat is AI-generated | 20,800,000 | 364 | 91,910 | 312 | 6,610,000 | 1,880,000,000 | 49,200 | 86 | 66 |
| Images appearing to show Donald Trump arrest created by AI | 10,200,000 | 401 | 22,135 | 671 | 8,380,000 | 1,860,000,000 | 92,900 | 86 | 63 |
| Simulation of AI drone killing its human operator was hypothetical, Air Force says | N/A | 1,806 | 162,438 | 1,689 | 1,330,000 | 645,000,000 | 12,500 | 75 | 45 |
| Pictures of Elon Musk’s ‘robot wives’ are AI-generated | N/A | 18 | 3,951 | 48 | 1,840,000 | 721,000,000 | 50,900 | 71 | 22 |
| Video of Hillary Clinton endorsing Ron DeSantis is AI-generated | 840,577,000 | 2 | 797 | 91 | 484,440 | 43,565,180 | 215 | 52 | 20 |
| Video shows deepfake of Elon Musk | 10,500,000 | 160 | 30,624 | 152 | 453,000 | 175,000,000 | 3,440 | 90 | 18 |
| Image of Elon Musk and Mary Barra holding hands is AI-generated | 14,200,000 | 65 | 943 | 2 | 15,000 | 87,200,000 | 9 | 91 | 16 |
| Image purporting to show Donald Trump mugshot is AI-generated | 6,467 | 401 | 22,135 | 57 | 691,900 | 105,000,000 | 140 | 78 | 11 |
| ‘Recent photo’ of Julian Assange was generated by AI | 5,300,000 | 13 | 403 | 9 | 269,000 | 88,100,000 | 554 | 75 | 10 |
| Images of satanic-themed hotel are AI art; they do not show a business opening in Texas | 143,900 | 14 | 384 | 2 | 47,000 | 121,000 | 20,600 | 82 | 9 |
| Images shared as first views of Titan sub debris field are fabricated | 354,500 | 18 | 4,432 | 17 | 1,061,180 | 1,101,211,300 | 408 | 73 | 9 |
| Image of Tom Cruise with stunt doubles is AI-generated | 1,300,000 | 4 | 11,594 | 24 | 757,000 | 464,000,000 | 378 | 88 | 8 |
| Target ‘satanic clothing’ collection is not real, it’s AI-generated | N/A | 62 | 1,171 | 13 | 475,000 | 148,000,000 | 501 | 78 | 8 |
| Video of Australian news interview featuring Bill Gates edited using AI | 41,100 | 22 | 828 | 9 | 178,000 | 78,700,000 | 272 | 86 | 6 |
| Picture of Emmanuel Macron in front of fire is AI-generated | 55,000 | 16 | 2,176 | 45 | 61,170 | 30,735,000 | 82 | 87 | 6 |

The fake AI stories that took the longest to be corrected

It’s difficult to tell what is real and what is AI-generated, so believing a made-up story is an increasingly common and relatable mistake. But which fake stories kept publishers convinced the longest?

We analysed the original publication date from the top key source for each story to determine how long it took to be taken down or corrected. On average, a fake AI story takes 6 days to be corrected. An AI-edited video of an Australian news interview featuring Bill Gates had publishers convinced the longest, at 15 days. The tampered video falsely suggests that Gates abruptly ended an interview with Australian Broadcasting Corporation (ABC) News journalist Sarah Ferguson after facing questions about his involvement in COVID-19 vaccine distribution.

The story that took the next longest to be corrected was the simulation of an AI drone killing its human operator. At a summit, the US Air Force’s chief of AI testing, Colonel Tucker Hamilton, described a simulation in which the drone turned on its operator to prevent it from interfering with its mission. This story wasn’t a faked image or video but misinformation about AI capabilities, and the correction was drawn out by the Air Force’s delayed clarification that the scenario was hypothetical.

All dates are DD/MM/YYYY.

| Headline | Live date | Date of correction | Days to correction |
| --- | --- | --- | --- |
| Video shows deepfake of Elon Musk | 27/12/2022 | 04/01/2023 | 8 |
| Images of ‘spaceship’ in Antarctica were created by a digital artist | 15/02/2023 | 28/02/2023 | 13 |
| Images of a ‘satanic fashion show’ were digitally created, not taken at New York Fashion Week | 21/02/2023 | 01/03/2023 | 8 |
| Images of satanic-themed hotel are AI art; they do not show a business opening in Texas | 06/03/2023 | 10/03/2023 | 4 |
| Video of Australian news interview featuring Bill Gates edited using AI | 08/03/2023 | 23/03/2023 | 15 |
| Images appearing to show Donald Trump arrest created by AI | 21/03/2023 | 21/03/2023 | 0 |
| Viral pictures of Pope Francis wearing puffer jacket are AI-generated | 25/03/2023 | 26/03/2023 | 1 |
| Image of Pope Francis wearing oversized white puffer coat is AI-generated | 25/03/2023 | 29/03/2023 | 4 |
| Image of Elon Musk and Mary Barra holding hands is AI-generated | 26/03/2023 | 30/03/2023 | 4 |
| Screenshot showing Telegraph tweet on Pope Francis and AI is fabricated | 30/03/2023 | 04/04/2023 | 5 |
| ‘Recent photo’ of Julian Assange was generated by AI | 31/03/2023 | 02/04/2023 | 2 |
| Image purporting to show Donald Trump mugshot is AI-generated | 04/04/2023 | 05/04/2023 | 1 |
| Images purporting to show ancient alien artifacts unearthed during World War Two are AI-generated | 05/04/2023 | 17/04/2023 | 12 |
| Video of Hillary Clinton endorsing Ron DeSantis is AI-generated | 11/04/2023 | 17/04/2023 | 6 |
| Images of eggs hanging from trees are AI-generated | 01/05/2023 | 05/05/2023 | 4 |
| Images of purported ‘satanic teachings’ in public libraries are AI-generated | 04/05/2023 | 08/05/2023 | 4 |
| Photos of Prince Harry and Prince William together at the coronation are AI-generated | 05/05/2023 | 17/05/2023 | 12 |
| Pictures of Elon Musk’s ‘robot wives’ are AI-generated | 20/05/2023 | 22/05/2023 | 2 |
| Simulation of AI drone killing its human operator was hypothetical, Air Force says | 26/05/2023 | 08/06/2023 | 13 |
| Target ‘satanic clothing’ collection is not real, it’s AI-generated | 26/05/2023 | 31/05/2023 | 5 |
| Image of Tom Cruise with stunt doubles is AI-generated | 07/06/2023 | 14/06/2023 | 7 |
| Images falsely claim to show Titanic sub debris | 22/06/2023 | 25/06/2023 | 3 |
| Images falsely claim to show Titanic sub debris | 22/06/2023 | 26/06/2023 | 4 |
| 1883 ‘photograph’ of bigfoot or sasquatch is AI-generated | 27/06/2023 | 07/07/2023 | 10 |
| Picture of Emmanuel Macron in front of fire is AI-generated | 01/07/2023 | 05/07/2023 | 4 |
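
For anyone who wants to reproduce the “days to correction” figures, the calculation is simply the difference between the two dates in the table above (DD/MM/YYYY). A minimal Python sketch using a few of the rows:

```python
from datetime import datetime
from statistics import mean

# A few (live date, correction date) pairs taken from the table above, DD/MM/YYYY.
corrections = {
    "Video shows deepfake of Elon Musk": ("27/12/2022", "04/01/2023"),
    "Bill Gates interview edited using AI": ("08/03/2023", "23/03/2023"),
    "AI drone simulation was hypothetical": ("26/05/2023", "08/06/2023"),
}

def days_to_correction(live: str, corrected: str) -> int:
    fmt = "%d/%m/%Y"
    return (datetime.strptime(corrected, fmt) - datetime.strptime(live, fmt)).days

gaps = {story: days_to_correction(*dates) for story, dates in corrections.items()}
print(gaps)                            # e.g. 8, 15 and 13 days for these three stories
print(round(mean(gaps.values()), 1))   # average over this subset only
```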

Is the public able to detect a fake AI news story?

We surveyed over 1,000 members of the US public and a further 500 in the UK, all aged 18 or over, who use social media and have an interest in the news, to learn more about the public’s confidence in dealing with fake AI-generated stories. When asked which platform they trust most for news stories, respondents voted newspapers and tabloids top. Surprisingly, video-based social platforms like TikTok and Snapchat came second, demonstrating their huge influence and the importance of regulating misinformation on these platforms.

Traditional social media platforms such as Instagram, Twitter, and Facebook came in last place, demonstrating the public’s awareness of the unreliability of information found there.

Over 80% of UK research participants (84%) claimed they are confident in spotting a fake news story, and US participants were even more sure of their abilities at 88%. However, when shown a fake AI news story alongside a real one, half of UK respondents believed the fake story to be real. US respondents fared slightly better, but 44% still picked the fake story.

We broke down the data by region in the UK and the US to identify whether any regions are more vulnerable to AI-generated fakes.

In the UK, despite its reputation for worldly cynicism, Greater London struggled the most to identify the AI-generated fake news, with 64% answering incorrectly. In the South East of England, scores were almost reversed, with 63% correctly spotting the fake.

Arkansas was the most susceptible US state, with 67% picking the wrong story, followed by North Carolina with 62%. In contrast, only 20% of respondents in Indiana answered incorrectly, crowning it the state savviest about AI fake news.

The platforms both the UK and the US trust most to publish true news stories
1. Newspapers and tabloids
2. Video-based social platforms (TikTok, Snapchat, etc.)
3. Community forums (Reddit, Quora, etc.)
4. Podcasts
5. Social media platforms (Facebook, Instagram, Twitter, etc.)
UK survey results
% claiming they are fair or better at identifying a fake news story: 84%
% that selected the fake story as the real one: 50%

The regions most susceptible to AI misinformation in the UK (% that selected the fake story as the real one):
1. Greater London: 63.6%
2. East Midlands: 62.5%
3. North East: 61.5%
4. West Midlands: 53.7%
5. Yorkshire & the Humber: 50%
6. Wales: 50%
7. North West: 45.3%
8. East of England: 45.2%
9. South West: 43.9%
10. Scotland: 41%
11. South East: 37.1%
Northern Ireland has been excluded due to lack of respondents.
US survey results
% claiming they are fair or better at identifying a fake news story: 88%
% that selected the fake story as the real one: 44%

The states most susceptible to AI misinformation in the US (% that selected the fake story as the real one):
1. Arkansas: 66.7%
2. North Carolina: 61.9%
3. Nevada: 60%
4. California: 57.5%
5. Utah: 55.6%
6. Missouri: 54.5%
7. Arizona: 53.3%
8. Pennsylvania: 52.5%
9. Washington: 50%
10. Alabama: 50%
11. South Carolina: 50%
12. New York: 48.8%
13. South Dakota: 47.1%
14. Georgia: 46.2%
15. Louisiana: 44.4%
16. Florida: 44.3%
17. Mississippi: 42.9%
18. Michigan: 42.4%
19. Ohio: 42.2%
20. Colorado: 41.7%
21. Texas: 40.9%
22. Maryland: 40%
23. New Jersey: 39.5%
24. Tennessee: 38.5%
25. Virginia: 36.8%
26. Massachusetts: 33.3%
27. Kentucky: 31.8%
28. Wisconsin: 30.4%
29. Illinois: 28.6%
30. Minnesota: 27.3%
31. Oklahoma: 27.3%
32. Indiana: 20%
Some states have been excluded due to lack of respondents.

Why are fake AI-generated stories dangerous?

While falsified images have been around for as long as photography itself, it is the often nefarious objective behind them that is new. Claims to have photographed fairies in the garden or a monster in Loch Ness harmed no one; today, cyber criminals and political groups use AI-generated images and content to influence public opinion, extort money from victims, trick people into handing over personal information and company secrets, and even convince people that their loved ones have been kidnapped, prompting ransom payments.

How to spot a fake AI news story

With AI advancing quickly, differentiating between the truth and AI fakery can seem an impossible task, and a zero trust approach (verify before you trust anything) will help. Here are our top tips to avoid being fooled.

1. Try to find the original source of the story

If you see an AI-related headline claiming outlandish advancements, or an image or video that seems unlikely, first check the source. If it’s on social media, the comments could hold information about where it originated. For images, you could conduct a reverse image search using Google Reverse Image Search, TinEye or Yandex. Searching for the origin can reveal further context and existing fact checks by trustworthy publishers.
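
If you would rather not upload images by hand, the reverse image search engines mentioned above can usually be reached with a URL that includes the image’s web address. The small Python sketch below shows the idea; the URL patterns are the commonly used public endpoints at the time of writing and may change, and the image URL is hypothetical.

```python
import webbrowser
from urllib.parse import quote

def reverse_image_search(image_url: str) -> None:
    """Open reverse image searches for a publicly hosted image in the default browser."""
    encoded = quote(image_url, safe="")
    # Commonly used public search endpoints; these patterns may change over time.
    engines = {
        "Google Lens": f"https://lens.google.com/uploadbyurl?url={encoded}",
        "TinEye": f"https://tineye.com/search?url={encoded}",
        "Yandex": f"https://yandex.com/images/search?rpt=imageview&url={encoded}",
    }
    for name, url in engines.items():
        print(f"Opening {name} ...")
        webbrowser.open(url)

# Hypothetical image URL used for illustration only.
reverse_image_search("https://example.com/suspicious-image.jpg")
```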

2. For image-based stories:

Enlarge the image and check for errors. Making the image bigger will reveal poor quality or incorrect details – both telltale signs of an AI-generated image.

Check the image’s proportions. A frequent mistake in AI images is in the proportions and quantities of body parts and other objects: hands, fingers, teeth, ears and glasses are often deformed or appear in the wrong numbers.

Is there anything odd about the background? If an image has been altered or fabricated, objects in the background are often deformed, repeated or lacking in detail.

Does the image look smooth or lack imperfections? With AI images, aspects that would normally be highly detailed, like skin, hair and teeth, are often smoothed and perfected: faces appear flawless and fabrics look unnaturally uniform.

Do the details look correct? Inconsistencies in the logic of the image can be a telltale sign of AI generation: perhaps the colouring of something as small as the eyes doesn’t match across different images of the same subject, or a pattern changes slightly partway through.
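
A couple of these image checks are easier with a few lines of code. The sketch below uses the Pillow library and a hypothetical file name: it enlarges a crop so details like hands and backgrounds are easier to inspect, and prints any EXIF metadata. Genuine camera photos usually carry camera EXIF data, while AI-generated or heavily re-saved images often have none, so treat the result as a hint rather than proof.

```python
from PIL import Image  # pip install Pillow

def inspect_image(path: str) -> None:
    img = Image.open(path)
    print(f"Size: {img.size}, format: {img.format}")

    # Enlarge the centre of the image 4x so hands, text and background
    # details are easier to check for the telltale AI artefacts described above.
    w, h = img.size
    crop = img.crop((w // 4, h // 4, 3 * w // 4, 3 * h // 4))
    crop.resize((crop.width * 4, crop.height * 4), Image.LANCZOS).show()

    # Camera photos usually carry EXIF data (camera model, exposure, etc.).
    # An empty result is common for AI-generated or re-saved images.
    exif = img.getexif()
    if not exif:
        print("No EXIF metadata found.")
    for tag_id, value in exif.items():
        print(tag_id, value)

inspect_image("suspicious.jpg")  # hypothetical file name
```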

3. For video-based stories:

Is the video image small? As with images, a small, low-resolution video can indicate that ‘deepfake’ AI software has been used, as many tools can only deliver a convincing fake at a very low resolution.

Are the subtitles oddly placed? With fake videos, subtitles are often placed strategically to cover the face, making it harder for viewers to notice an unconvincing deepfake in which the audio doesn’t match the lip movements.

Are the lip shapes misaligned? Misaligned lip shapes across frames are a telltale sign of manipulation, with the boundary of the spliced region often visible near the middle of the mouth.
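
The first of these video checks, resolution, is easy to confirm programmatically. Here is a minimal sketch using OpenCV (the file name is hypothetical): it reports the resolution and frame rate, and saves a frame so the mouth region can be inspected closely.

```python
import cv2  # pip install opencv-python

def video_summary(path: str) -> None:
    cap = cv2.VideoCapture(path)
    if not cap.isOpened():
        raise FileNotFoundError(path)

    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    fps = cap.get(cv2.CAP_PROP_FPS)
    frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    print(f"Resolution: {width}x{height}, FPS: {fps:.1f}, frames: {frames}")

    # Very low resolutions are worth treating with extra suspicion, as noted
    # above, though plenty of genuine clips are low-resolution too.
    if height and height < 480:
        print("Low resolution: inspect more closely.")

    # Save a single frame so the mouth region can be examined for splicing
    # boundaries or lip shapes that do not match the audio.
    ok, frame = cap.read()
    if ok:
        cv2.imwrite("frame_0.png", frame)
        print("Saved first frame to frame_0.png")
    cap.release()

video_summary("suspicious_clip.mp4")  # hypothetical file name
```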

Following these tips will help you identify falsified stories. However, as AI advances, newer versions of programs like Midjourney are becoming better at generating higher-quality images. This means users may not be able to rely on spotting these kinds of mistakes for much longer.

Is the AI hype more fakery?

We have discussed the impact of fake AI-generated images and stories, but do you know what AI is actually capable of? See how savvy you are at spotting fake AI development news with this quiz: it contains 12 real and fake AI development headlines, and your job is to spot which are true and which are false.

Can you tell the difference?
Story 1 of 12: AI can now make you immortal
Story 2 of 12: AI predicts if you are going to quit your job
Story 3 of 12: AI app lets you taste your favourite meals with VR
Story 4 of 12: Here’s How AI Can Predict Hit Songs With Frightening Accuracy
Story 5 of 12: AI can now accurately predict the weather 2 months in advance
Story 6 of 12: How AI is pinpointing cancer and saving lives in a Scottish hospital
Story 7 of 12: AI has now unlocked the ability to read people’s minds
Story 8 of 12: New AI can guess whether you are gay or straight from a photograph
Story 9 of 12: AI can now crack most passwords in less than a minute
Story 10 of 12: New AI can now predict the best job for your child from 3 years old
Story 11 of 12: Build Your Own Perfect AI Girlfriend
Story 12 of 12: Meet the AI influencers ALREADY making millions from mega deals with fashion giants

Netskope SkopeAI for GenAI is a stand-alone solution that can be added to any security stack, allowing IT teams to safely enable the use of generative AI apps such as ChatGPT and Bard, with application access control, real-time user coaching, and best-in-class data protection.

Our SASE architecture combines market-leading Intelligent Security Service Edge with next-generation Borderless SD-WAN to provide a cloud-native, fully-converged solution that supports zero trust principles. Netskope Zero Trust Network Access (ZTNA) offers fast, secure, and direct access to private applications hosted anywhere, to reduce risk, simplify IT and optimize the user experience.

Methodology

The stories analysed were gathered and collated from a series of fact-checking sites such as Snopes, Reuters and Channel 4.

Buzzsumo and Brandwatch were used to identify the sources of misinformation flagged by fact checkers, so that the reach of the content could be measured consistently.

Google News was used to gather indexed coverage from the web to extrapolate viewing estimates.

Coveragebook was used to determine estimated audiences, social engagement and linked coverage relating to the selected AI fake news stories.

Facebook, X (Twitter) and Archive.is were used to gather AI fake news content.

The opinion poll was conducted with 501 UK respondents and 1,005 US respondents, all aged 18+, who specified that they used social media and had an interest in news.