Steve Harvey is best known for awarding money to “Family Feud” contestants or dishing out advice on his radio show.
But in recent years, he’s also become a popular target of AI-generated memes, many of which are humorous and seemingly harmless — like depictions of Harvey as a rock star or running from demons.
More sinister actors, however, are using AI-generated versions of Harvey’s image, voice and likeness for scams.
Last year, Harvey was among celebrities like Taylor Swift and Joe Rogan whose voices were mimicked by AI and used to promote a scam that promised people government-provided funds.
“I’ve been telling you guys for months to claim this free $6,400 dollars,” a voice that sounds like Harvey’s says in one video.
Now, Harvey is speaking up, advocating for legislation that would penalize the people behind these scams as well as the platforms hosting them. And Congress seems to be listening: it’s considering several bills aimed at penalizing nefarious uses of AI, including an updated version of the No Fakes Act, which would hold creators and platforms liable for unauthorized AI-generated images, videos and sound.
The bipartisan group of senators behind the act, including Democrats Chris Coons of Delaware and Amy Klobuchar of Minnesota and Republicans Marsha Blackburn of Tennessee and Thom Tillis of North Carolina, is planning to reintroduce it within the next few weeks, a source familiar with the matter told CNN. It joins the Take It Down Act, a bill aimed at criminalizing AI-generated deepfake pornography that is also before Congress and earned the support of first lady Melania Trump this week.
In 2025, Harvey says scams using his likeness are at “an all-time high.”
“I prided myself on my brand being one of authenticity, and people know that, and so they take the fact that I’m known and trusted as an authentic person, pretty sincere,” Harvey told CNN in an interview at Tyler Perry Studios between filming episodes of “Family Feud.” “My concern now is the people that it affects. I don’t want fans of mine or people who aren’t fans to be hurt by something.”

Celebrities call for action against AI deepfakes
Major recording artists, actors and other celebrities have been caught up in AI scandals over the last two years as the technology rapidly evolves. A woman in France lost $850,000 after scammers used AI-generated images of Brad Pitt to con her into thinking she was helping the actor.
Actress Scarlett Johansson, who has openly grappled with AI imitating her likeness, has also thrown her support behind legislation.
“There is a 1,000-foot wave coming regarding AI that several progressive countries, not including the United States, have responded to in a responsible manner,” Johansson said in a February statement after an AI-generated video depicting a phony version of her responding to Kanye West’s antisemitic remarks went viral. “It is terrifying that the US government is paralyzed when it comes to passing legislation that protects all of its citizens against the imminent dangers of AI.”
Harvey said he also supports the legislation, which has garnered support from the Recording Academy, the Screen Actors Guild, the Motion Picture Association, major talent agencies and some of the biggest names in Hollywood.
“It’s freedom of speech, it’s not freedom of, ‘make me speak the way you want me to speak,’” Harvey said. “That’s not freedom, that’s abuse. And Congress has got to get involved in this thing, because it’s going to end up hurting them, too.”
Before reintroducing the No Fakes Act, the senators also hope to win support from the online platforms that could be penalized under the bill for hosting such AI content. The current version fines platforms $5,000 per violation, meaning a single viral AI creation could quickly add up to millions of dollars in fines. A source familiar with the bill said the platforms’ support would not be won at the cost of weaker penalties.
“We’ve been very clear with the platforms who have withheld their endorsement for now that we’re not going to make any changes unless the folks that we’re doing this bill for, the folks in the creative industries, are okay with it,” the source said. “We’re not going to sell them off to try to get the platforms on board.”
But critics of the bill, including public advocacy organizations like Public Knowledge, the Center for Democracy and Technology, the American Library Association and the Electronic Frontier Foundation, worry that it introduces too much regulation as written. In a letter to the senators last year, they warned it could endanger First Amendment rights and enable misinformation while resulting in a “torrent” of lawsuits.
“We understand and share the serious concerns many have expressed about the ways digital replica technology can be misused, with harms that can impact ordinary people as well as performers and celebrities,” they wrote. “These harms deserve the serious attention they are receiving, and preventing them may well involve legislation to fill gaps in existing law. Unfortunately, the recently-introduced NO FAKES bill goes too far in introducing an entirely new federal IP right.”

The startup helping celebrities spot AI deepfakes
But as Congress works through the process, AI continues to evolve. And celebrities say they are limited in how they can pursue imitators using their likeness, especially anonymous online accounts.
That’s where companies like Vermillio AI come in. The company, which has partnered with major talent agencies and movie studios, uses a platform called TraceID that tracks AI-generated uses of its clients’ likenesses and automates normally cumbersome takedown requests.
“Back in 2018 there were maybe 19,000 pieces of deepfake content,” Vermillio CEO Dan Neely told CNN in an interview. “Today, there are roughly a million created every minute.”
For celebrities, tracking deepfakes can be especially challenging because they can spread so quickly on social platforms. “So trying to find them, play this game of Whack a Mole, is quite complex,” Neely said.
Neely showed CNN Harvey’s Vermillio account, which included AI-generated voice chatbots meant to sound like Harvey and fake videos of the TV personality encouraging gambling.
Neely said the company’s technology uses a type of “fingerprinting” to distinguish authentic content from AI-generated material. That involves crawling the web for images that have been tampered with using generative AI tools, including the large language models, or LLMs, that are the building blocks of many popular generative AI services.
“An image of you is made up of millions of pieces of data,” Neely said. “How do we use those pieces of data to go and find that where it exists across the internet?”
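Vermillio hasn’t published how TraceID works under the hood, but the general idea Neely describes can be illustrated with perceptual hashing, a common open-source technique for recognizing edited copies of a known image. The sketch below is a minimal illustration, not Vermillio’s method; the file names and the matching threshold are hypothetical.

```python
# A minimal sketch of image "fingerprinting" via perceptual hashing.
# This is NOT Vermillio's TraceID, whose internals are unpublished;
# it only illustrates matching altered copies of a verified image.
# Requires: pip install pillow imagehash
from PIL import Image
import imagehash

# Hypothetical file paths, for illustration only.
REFERENCE = "harvey_authentic.jpg"               # verified original photo
CANDIDATES = ["scraped_1.jpg", "scraped_2.jpg"]  # images a web crawler surfaced

# A perceptual hash condenses an image's visual structure into a short
# fingerprint that survives resizing, re-encoding, and light edits.
reference_fp = imagehash.phash(Image.open(REFERENCE))

for path in CANDIDATES:
    candidate_fp = imagehash.phash(Image.open(path))
    # Subtracting two hashes gives a Hamming distance: 0 means visually
    # identical; small values suggest an edited or AI-altered derivative.
    distance = reference_fp - candidate_fp
    if distance <= 10:  # threshold chosen arbitrarily for this sketch
        print(f"{path}: likely derivative (distance={distance}), flag for takedown review")
    else:
        print(f"{path}: no match (distance={distance})")
```

A production system would presumably layer many more signals on top of anything this simple, including video and audio matching, but the basic loop of fingerprint, crawl, compare and flag is the shape of what Neely outlines.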
Celebrities can afford a service like Vermillio. But for other creators, there are fewer resources.
“The sooner we do something, I think the better off we’ll all be,” Harvey said. “Because, I mean, why wait? How many people we got to watch get hurt by this before somebody does something?”