From an advertisement for an herbal remedy that promises to cure all to a video featuring a voice that sounds just like a movie star, you’ve surely encountered spam and scam advertisements online. And they have likely been created with artificial intelligence.
The accessibility of generative AI tools has supercharged the problem of online spam and scams, one that has persisted since the advent of the internet. And while creators of such content have access to this ever-evolving technology, tech giants are also honing their internal AI systems to fight the deluge.
“It's not that this is a new problem. It is an old problem, supercharged,” said Nate Elliott, a principal analyst at Emarketer. “The biggest difference is the speed and the scale that AI offers both the good actors and the bad actors.”
The FBI’s recent Internet Crime Report detailed more than 22,000 complaints about AI-related scams last year, with total losses associated with those complaints exceeding $893 million.
Google released its annual ads safety report Thursday, acknowledging that scammers are increasingly trying to run sophisticated, malicious ads but emphasizing that its AI-powered tools are strong defenders.
The company said its generative AI technology, known as Gemini, caught over 99% of policy-violating ads before they ever reached an audience last year.
In 2025, the company blocked or removed more than 8.3 billion ads, including 602 million ads with policy violations that are most closely associated with scams. That's up from a total of 5.1 billion ads blocked or removed in 2024. About 24.9 million advertiser accounts were suspended last year, more than 4 million of those for scam-related activity.
Google has long been a dominant force in the digital advertising world. The company saw more than $200 billion in net worldwide ad revenues last year, according to data from Emarketer, but the research firm predicts Meta will outperform Google in 2026.
Google said it has a team of thousands of people working to create and enforce its advertising policies at scale. Keerat Sharma, Google's vice president and general manager of ads privacy and safety, said the advancement of generative AI as a part of Google's defense system has led to more powerful results in combating problematic content.
Gemini now allows the team to analyze hundreds of billions of signals — including account age, behavioral cues and campaign patterns — to better understand the “nuance of what an advertiser's intent actually is,” Sharma said. This means the team can largely determine whether an advertiser is legitimate or whether its intent could be malicious. Capturing that nuance has also helped keep real businesses' ads online, with the report detailing that incorrect advertiser suspensions were reduced by 80% last year.
Gemini has also helped with speed, Sharma said. Analyzing the digital assets in an ad used to take anywhere from a few seconds to minutes or even longer, but now, Sharma said that can happen in milliseconds. That “allows us to stop things right at the front door," he said. Google also relies on several other defense mechanisms, like an expansive advertiser verification program, that work together to fortify protections.
The kind of content that Google is aiming to block and remove is vast and varied. Bad ads can take the form of “all the forms of spam and scam that have always existed, just people are able to produce them faster and at higher volume,” Elliott said.
A Google spokesperson said the company doesn't report the number of AI-generated ads it blocks or removes because its enforcement isn’t based on how an ad was created, but rather which policies it violates. The spokesperson noted that many AI-generated ads come from legitimate businesses and comply with Google's policies.
Experts who spoke with The Associated Press said the push and pull between AI-powered scams and AI-powered defense mechanisms will endure as the technology advances.
“We’re already close, but it’s going to be heading even more to (where) it’s just AI versus AI,” said Matt Seitz, the director of the AI Hub at the University of Wisconsin-Madison. “The volume of this problem is so large that it can’t be managed directly through humans.”