Millions of TikTok users have watched some version of a video in the past week falsely claiming that "they're installing incinerators at Alligator Alcatraz." The videos refer to an internet conspiracy theory, which spread widely despite a lack of evidence, that furnaces were being set up at a state-run immigration detention facility in the Florida Everglades.
One of the videos circulating the rumor attracted nearly 20 million views. It spurred a conversation on TikTok, with creators weighing in with their own takes and, in a handful of instances, attempting to debunk the baseless theory.
But one account's tactics stand out in this familiar cacophony of messy online virality: a realistic-looking TikToker giving a direct-to-camera description of the incinerator conspiracy theory. The speaker's image and voice appear to have been created with artificial intelligence tools, according to two forensic media experts NPR consulted. The twist: The words spoken in the video are identical to those in a video posted by a different TikTok account days earlier. The copied version attracted more than 200,000 views on TikTok.
Researchers of deepfakes — AI-generated images and videos so plausible they can trick people into thinking they're real — say this duplication appears to represent a new way that AI is being harnessed to deceive: having AI recite the actual words of a real person, right down to the stumbles, "ums" and "uhs."
Ali Palmer, a creator in Dallas who posts on TikTok as @aliunfiltered_, made a video about a father who jumped off a Disney cruise ship to save his child. Her exact words were then ripped off for a video made with AI.
Copying on TikTok is rampant, she said, but usually the spam accounts that do it repost her entire video. Having her words recited by an AI-generated person is new, she said.
"It's upsetting. It feels like a violation of privacy," said Palmer, 33. "I could never imagine copying someone and then making money on it. Just feels dirty."
Palmer said she has reported all types of copying to TikTok, but nothing happens. "It's incredibly frustrating."
Hany Farid, a professor at the University of California, Berkeley, who studies digital forensics, said what is new here is that an average person's words are being stolen.
"All the time, we're seeing people's identities being co-opted to do things like hawk crypto scams or push bogus cures for cancer, but usually it's a famous person or influencer," Farid said.
At NPR's request, Farid used a digital forensic tool to analyze the copied incinerator video, the Disney cruise video and other videos posted by the same account, and concluded they were the products of AI.
"It's the kind of thing that would be super-easy to do with today's AI tools and something that would easily slip through the content moderation cracks," he said.
While using AI to copy videos doesn't appear to violate TikTok's policies, the platform does say it requires users "to label all AI-generated content that contains realistic images, audio, and video." The copied videos NPR identified are not labeled as AI-generated.
Deepfakes have grown more sophisticated in recent years, and they are increasingly being deployed for malicious purposes.
The technology has been used to impersonate politicians, including Secretary of State Marco Rubio, former President Joe Biden and Ukrainian President Volodymyr Zelenskyy. The rise of deepfake "nudify" tools prompted Congress this year to pass a federal law to combat the spread of nonconsensual intimate imagery, including fake nudes generated by AI.
Further blurring the line between fact and fiction is this latest approach: capitalizing on a TikTok viral moment by having a fictitious creator recite the words of a real one.
It is difficult to gauge how widespread the practice is on TikTok, which is used by more than 1 billion people worldwide. NPR could not identify the people or motivations behind the accounts replicating creators' words. The accounts didn't respond to requests for comment. Neither did TikTok or most of the creators whose words were cribbed.
Two accounts identified by NPR as using other creators' words and apparently using AI-generated images and voices do have some similarities: They each have about 10,000 followers but do not follow anyone back. Many of the videos posted by both accounts depict Black personas that Berkeley's Farid said appear to be "low-quality deepfakes" made with AI. And each account pilfered another TikTok creator's words about various viral topics, ranging from a woman who received a face-lift in Guadalajara, Mexico, to a woman wearing a dog collar to the former Meghan Markle dancing in a hospital delivery room.
In the Meghan video, the AI persona takes on a British accent. In others, the voice assumes a completely different register.
"The biggest tell is when you take a step back and look at the account as a whole — when you look at one video compared to another video, it's clear that the voice of the persona changes from video to video," said Darren Linvill, a communications professor and co-director of Clemson University's Media Forensics Hub. Linvill also reviewed the videos at NPR's request and concluded they were made with the help of AI tools.
Together, the two accounts have rung up millions of views by seizing on viral stories that skew more toward tabloid fodder than political drama. But researchers who track state-sponsored information warfare say testing out new strategies for juicing virality is something government-backed actors also regularly do.
Linvill studies how nations including China, Russia and Iran use digital tools for propaganda. He says creating AI personas, such as faux news anchors, is a tactic also used by state-backed influence operations. While NPR found no indication that the accounts it identified are part of such a campaign, there is often overlap in tactics used by deceptive state actors and accounts seeking to gain engagement on social media platforms.
"Stealing content on social media is as old as social media," Linvill said. "What we're seeing here is AI doing what we've seen AI be really good at over and over again, and that is systematizing things, making processes cheaper, faster and easier. And AI is really good at making it faster and easier to steal content."
Alex Hendricks was scrolling through TikTok this week when he saw two back-to-back videos about the Florida incinerator conspiracy theory. It's normal to see many creators weighing in on the same subject, but these videos struck him as unusual because the monologues were eerily identical, just in different-sounding voices.
So Hendricks made a TikTok video pointing this out. Yet it barely got any views.
"I know there's a lot of fake news on TikTok, and I've been skeptical of anything because of AI, but this kind of replication felt new and crazy," said Hendricks, 32, who works in retail in Montana. "Which is why I tell everyone: Don't believe anything that you see. And cross-reference everything you see before you share it on TikTok," he said. "But I'm not sure they'll listen."