Prime example of the dangers in the pay-to-verify system: This account, which tweeted a (very likely AI-generated) photo of a (fake) story about an explosion at the Pentagon, looks at first glance like a legit Bloomberg news feed. pic.twitter.com/SThErCln0p
— Andy Campbell (@AndyBCampbell) May 22, 2023
All in all, the hoax — the latest in a string of AI-generated images to fool some social media users — appears to have done little immediate damage. Twitter has since suspended the Bloomberg Feed account, which was not related to the real Bloomberg media organization, and within about 20 minutes, local authorities had debunked the report.
“Just looking at the image itself, that’s not the Pentagon,” said Nate Hiner, a captain with the fire department in Arlington, Va., where the Pentagon is located. “I have no idea what that building is. There’s no building that looks like that in Arlington.”
Yet the mechanisms involved, from the image’s amplification by large propaganda accounts to the almost instantaneous response from the stock market, suggest the potential for more such mischief if AI tools continue to make inroads in fields such as social media moderation, news writing, and stock trading.
And Twitter is looking like an increasingly likely vector, as new owner Elon Musk has gutted its human workforce, laid off a team that used to fact-check viral trends, and changed account verification from a manual authentication process to one that’s largely automated and pay-for-play.
With experts predicting that AI will impact millions of human jobs, the concern becomes not just whether AI-generated misinformation might mislead people, but whether it might mislead its fellow automated systems.
“This isn’t an AI issue, per se,” said Renee DiResta, research manager at Stanford Internet Observatory and an expert on how misinformation circulates. “Anyone with Photoshop experience could have made that image — ironically, could probably have done it better. But it’s a look at how signals that help people decide whether information about breaking news is trustworthy on Twitter have been rendered useless, just as the capacity to create high-resolution unreality has been made available to everyone.”
Verified accounts spread the news
One of the first accounts to post about the fake event is known as “Walter Bloomberg.” It tweeted at 10:06 a.m.: “Large Explosion near The Pentagon Complex in Washington D.C. – Initial Report.” That tweet did not include any images, just text.
The nine-year-old account, which has no known relation to the fast-moving Bloomberg terminals it tries to emulate, has more than 650,000 followers and posts short headlines and story links. It's unclear where the account got the initial report, and messages sent to the owner's Twitter and Discord accounts did not bring a response. By 1:50 p.m. EDT, the false tweet had been viewed more than 730,000 times, Twitter data show.
In the next few minutes, other accounts posted similarly false reports. At 10:06 a.m., a 386,000-follower account with the handle @financialjuice, named “Breaking Market News,” tweeted, “INITIAL REPORTS OF A LARGE EXPLOSION NEAR THE PENTAGON COMPLEX IN WASHINGTON DC – TWITTER SOURCES.” At 10:08 a.m., @CheddarFlow, a stock-market-news account with 150,000 followers, tweeted, “Large explosion near the Pentagon complex in Washington D.C. – initial report.” Many others followed, according to a review of Twitter posts.
At 10:11 a.m., an account called @BloombergFeed, which is also unrelated to the real Bloomberg, posted the false report but added a new twist: a fake image showing a large plume of smoke next to what looks like a government building. The building in the photograph looks little like the Pentagon, but it bears some of the hallmarks of being AI-generated.
Since being created in August, @BloombergFeed has tweeted 224,000 times, including sometimes thousands of tweets a day, according to Social Blade, a social media analytics site. It frequently retweeted posts from the real Bloomberg. Yet it had fewer than 1,000 followers, and it's unclear who ran it or why. Twitter has since suspended the account.
At 10:17 a.m., the Walter Bloomberg account tweeted that the "Twitter account that reported about explosion near the Pentagon has deleted the tweet" but did not name the account. At 10:24 a.m., it tweeted that a Pentagon spokesperson said there had been no explosion. Those tweets drew hundreds of thousands fewer views than its initial false report.
Some of the accounts that tweeted about the fake event had blue “verified” checkmarks, while legitimate organizations that shared the truth did not. The official account for the Pentagon Force Protection Agency, which polices the Pentagon, doesn’t pay for a blue checkmark and Twitter has not given it a gray checkmark indicating it’s a verified institution. The agency retweeted a local law-enforcement message saying there was “NO explosion” at 10:27 a.m.; the tweet had only 78,000 views as of 4 p.m.
Twitter did not respond to a request for comment.
Local authorities scramble
Hiner, the Arlington Fire captain who handles the Northern Virginia department’s emergency communications, said it took about five minutes for him to realize the reports on Twitter were fake.
At 10:10 a.m., Hiner was in a meeting when he got the first call. He stepped out of the meeting to investigate.
The first sign something was off? He had not received any alerts from the department’s emergency software, First Due, which monitors dispatch and sends him a push notification when first responders are sent out for major incidents like fires.
Next, he checked his mobile data terminal — essentially a laptop that lists every active 911 incident in Arlington — and found no sign of anything going on near the Pentagon.
“There were no medical calls, no fire calls, no incidents whatsoever,” he said.
That’s when he finally pulled up social media himself, expecting to see some eyewitness accounts on Twitter. But again, there was nothing. All he saw was the doctored photo of the explosion.
At that point, he reached out to spokesmen at the Department of Defense and at the Pentagon Force Protection Agency. By 10:27 a.m., he’d posted on Arlington Fire’s Twitter account that the reports were false.
“There is NO explosion or incident taking place at or near the Pentagon reservation,” the tweet said, “and there is no immediate danger or hazards to the public.”
Hiner said that he sometimes receives odd inquiries from Arlington residents who have seen a fire truck in their neighborhood, or gets misguided calls based on scanner traffic. But he cannot recall another time, he said, "in which an emergency incident was being reported on social media that was just 100 percent inaccurate."
New twist on an old problem
From Photoshopped images of a shark on a highway during Hurricane Sandy to false reports of celebrity deaths, viral lies are nothing new on Twitter. Generative AI tools, from chatbots such as ChatGPT that can pen fake news stories to AI art tools such as Midjourney and Stable Diffusion, are only the newest tools in the hoaxsters’ kit. They’ve been used in recent months to create other viral images, including one that appeared to show Donald Trump getting arrested and another depicting Pope Francis making a fashion statement.
For the most part, mainstream media outlets have successfully refuted the misinformation, and the world has marched on as before. Still, some hoaxes have wrought chaos, to varying degrees. In 2013, a fake tweet about an attack on the White House touched off a quick drop in financial markets.
Over time, social media users and the news media have learned to turn a skeptical eye on viral reports, especially from unverified sources. But Twitter’s new verification system means that the blue check mark, once a visual shortcut that conveyed a modicum of authority on an account, no longer serves that function.
Sam Gregory, the executive director of the human rights organization Witness, whose group has studied fake images and disinformation, said the Pentagon explosion image tweeted Monday carries multiple hallmarks of a fake, including visual glitches and an inaccurate view of the Pentagon. The challenge with such fakes, Gregory said, is the speed with which they can blast across the internet.
“These circulate rapidly, and the ability to do that fact-check or debunk or verification at an institutional level moves slower and doesn’t reach the same people,” he said.
Though the image may be obviously fake to some, the fact that it was attached to an authoritative-sounding claim made it that much more likely to gain attention, Gregory added.
“The way people are exposed to these shallow fakes, it doesn’t require something to look exactly like something else for it to get attention,” he said. “People will readily take and share things that don’t look exactly right but feel right.”
As for why the fakes were shared, it’s unclear. Some fakes have been shared to score political points, while others have been used to troll or build an audience that the account may hope to monetize.
“Sometimes they’re doing it maliciously, or sometimes they’re just doing it to get a lot of views,” Gregory said. “You can get a lot of audience very quickly from this, and that is a powerful drug.”
Arlington County Board Chair Christian Dorsey (D) said local governments like Arlington’s face an increasingly steep challenge in responding to misinformation as AI makes it easier to rapidly generate plausible fakes. He said officials try to guide residents to follow local authorities on Twitter and turn to them for reliable information rather than “some random Twitter handle.” Arlington County and its police and fire/EMS departments are all verified with a “silver check” on Twitter, indicating that they’re government-run accounts.
But he recognizes that may not be enough.
“Our number of followers pales in comparison to some of the most popular social media accounts out there. You always run the risk that they’re not going to penetrate as deeply,” Dorsey said. But “absent any magic bullet, where these platforms ensure only the best truthful information is relayed, I think it’s the best we can do.”
Jeremy Merrill and Faiz Siddiqui contributed to this report.