Google suspends 39.2 million malicious advertisers in 2024

By Faizel Patel

Senior Journalist


Google confirmed that it used a collection of newly upgraded AI models to scan for bad ads.


Google suspended 39.2 million malicious advertisers in 2024 using artificial intelligence (AI).  

The search giant published its 2024 Ads Safety Report last week, confirming that it used a collection of newly upgraded AI models to scan for bad ads.

The result is a significant increase in suspended spammer and scammer accounts, with fewer malicious ads reaching people online before they are flagged.

Ads and scams

In 2024, Google also blocked or removed more than 5.1 billion ads and restricted an additional 9.1 billion, according to the report.

Amid concerns about digital scams, particularly in South Africa, where fake news has been on the rise across platforms, Google said the report offers timely insight into how AI is being used to protect voters, businesses, and everyday users.


Global efforts

The report also highlights global efforts to tackle issues like AI-generated impersonation scams (which resulted in more than 700,000 account suspensions and a 90% drop in reports) and political ad transparency (with over 10 million unverified election ads removed).

“These developments are especially relevant as more South Africans rely on online tools to build businesses, engage with content, and access public information.”

Enhanced AI

Google said it deployed more than 50 enhanced large language models (LLMs) to help enforce its ad policy in 2024.

It stated that its efforts in 2024 resulted in 39.2 million ad accounts being suspended for fraudulent activities, more than three times the 12.7 million accounts suspended in 2023.

The factors that trigger a suspension usually include ad network abuse, improper use of personalisation data, false medical claims, trademark infringement, or a mix of violations.

LLMs

Some 97% of Google’s advertising enforcement involved LLM-based AI models, which reportedly require less data to make a determination, making it feasible to tackle rapidly evolving scam tactics.

“This shift toward proactive prevention comes at a critical time. Across Africa and beyond, users are navigating a rapidly evolving digital environment where trust, safety, and transparency matter more than ever.

“In South Africa, the persistence of fake news online has raised concerns around the spread of misinformation. That’s why in 2024, Google updated its Misrepresentation policy, assembled a global team of over 100 experts, and took down over 700,000 scam-related advertiser accounts—contributing to a 90% drop in reported impersonation scams,” Google said.

Transparency

With nearly half the world’s population heading to the polls in 2024, Google also expanded election ad transparency, requiring all political advertisers to verify their identities and clearly disclose who’s paying for the message.

More than 10 million election-related ads were removed globally for failing to meet these standards.

Google said that while these are global figures, their local impact is deeply personal.

“From the business owner trying to reach new customers online to the everyday user trying to avoid a phishing scam, online safety remains essential for an open, trustworthy web.

“Across the continent, safe advertising also helps protect livelihoods—ensuring that small businesses, creators, and publishers can continue to benefit from a free and accessible internet,” Google said.

