In the waning months of 2023 and early 2024, Google faced a targeted campaign of scam ads that used the likenesses of public figures, often through deepfakes, to deceive its users. The search giant deployed its own artificial intelligence tools to combat the effort.
“When we detected this threat, we created a dedicated team to respond immediately,” the company said Thursday in its 2023 Ads Safety Report. “We pinpointed patterns in the bad actors’ behavior, trained our automated enforcement models to detect similar ads, and began removing them at scale.”
This sort of boost from AI enabled Google to block or remove over 5.5 billion ads that violated its policies and suspend more than 12.7 million advertiser accounts in 2023, nearly double the number of account suspensions in 2022, the company said.
“This new technology introduced significant and exciting changes and challenges to the digital advertising industry, and presents a unique opportunity to improve our enforcement efforts significantly,” Google said in the report.
The largest number of blocked or removed ads – more than 1 billion – reflected ad network “abuse,” including the promotion of malware. Google also flagged nearly 550 million ads for trademark violations, 206.5 million ads for violating its misrepresentation policies, which cover scam tactics, and 273.4 million ads for violating its financial services policies.
Using large language models, Google said it’s able to “rapidly review and interpret content at a high volume, while also capturing important nuances within that content.”
“These advanced reasoning capabilities have already resulted in larger-scale and more precise enforcement decisions on some of our more complex policies,” the report said.
As an example, it pointed to its policy against unreliable financial claims, which includes ads promoting get-rich-quick schemes. “The bad actors behind these types of ads have grown more sophisticated,” the report said. “They adjust their tactics and tailor ads around new financial services or products, such as investment advice or digital currencies, to scam users.”
While traditional efforts would detect such policy violations, “the fast-paced and ever-changing nature of financial trends make it, at times, harder to differentiate between legitimate and fake services, and quickly scale our automated enforcement systems to combat scams,” Google said. Large language models can more quickly recognize new trends and identify the patterns of bad actors leaping on those trends, enabling the company to distinguish a legitimate business from a get-rich-quick scam.
“This has helped our teams become even more nimble in confronting emerging threats of all kinds,” Google said. Overall, it’s now using these new models for more than 90% of its ad policing.
In addition to blocking 5.5 billion ads, Google also restricted over 6.9 billion ads last year.
Together, it blocked or restricted ads on more than 2.1 billion publisher pages and across more than 395,000 publisher sites, up from 143,000 in 2021. Sexual content was by far the largest category of publisher enforcement, with more than 1.8 billion pages targeted.
“We’ve only just begun to leverage the power of LLMs for ads safety,” the company said.
The efforts take on new importance as 2024 elections unfold in the U.S. and multiple other countries. In 2023, the company said, it verified more than 5,000 new election advertisers and removed more than 7.3 million election ads from advertisers who did not complete its verification process. It was also the first company to require that election ads containing synthetic content include disclosures.
Google took in $237.9 billion in advertising revenue in 2023, parent company Alphabet said in its fourth-quarter report.