Cybersecurity researchers have unmasked a novel ad fraud scheme that leverages search engine optimization (SEO) poisoning techniques and artificial intelligence (AI)-generated content to push deceptive news stories into Google’s Discover feed and trick users into enabling persistent browser notifications that lead to scareware and financial scams. The operation, codenamed Pushpaganda by HUMAN’s Satori Threat Intelligence and Research Team, marks a significant escalation in the ongoing battle against digital advertising fraud, demonstrating how malicious actors are increasingly weaponizing advanced technologies to exploit trusted online platforms.
The campaign specifically targets the personalized content feeds of Android and Chrome users, exploiting the trust users place in platforms like Google Discover. At its peak, Pushpaganda operated at alarming scale: roughly 240 million bid requests tied to 113 campaign-linked domains were observed over a single seven-day period. Initially identified as targeting users in India, the threat has since expanded rapidly to countries including the U.S., Australia, Canada, South Africa, and the U.K., underscoring the global reach and adaptability of modern ad fraud enterprises.
The Mechanics of Deception: SEO Poisoning, AI Content, and Push Notifications
Pushpaganda’s modus operandi is intricately designed to ensnare unsuspecting individuals. The entire scheme hinges on the scammers’ ability to lure users through Google Discover, a personalized feed that surfaces news and articles based on each user’s interests and search history. By employing SEO poisoning techniques, the threat actors manipulate ranking algorithms so that their deceptive content appears prominently within these feeds. This involves tactics such as keyword stuffing, link manipulation, and the creation of numerous low-quality pages to artificially boost rankings, effectively "poisoning" the results that feed into Discover.
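To make those tactics concrete, the sketch below shows what a "doorway page" generator of this kind might look like. This is a hypothetical illustration written for this article, not code recovered from the campaign; the keywords, page template, and cross-linking scheme are all invented.

```typescript
// Hypothetical sketch of a "doorway page" generator of the kind used in
// SEO poisoning -- illustrative only, not code recovered from Pushpaganda.

interface DoorwayPage {
  path: string;
  html: string;
}

function slug(s: string): string {
  return s.toLowerCase().replace(/[^a-z0-9]+/g, "-");
}

// Stamp out one near-duplicate page per trending keyword, each stuffed
// with repeated keywords and cross-linked to its siblings to inflate
// perceived relevance and link popularity.
function generateDoorwayPages(keywords: string[], domain: string): DoorwayPage[] {
  return keywords.map((kw, i) => {
    const siblingLinks = keywords
      .filter((_, j) => j !== i)
      .map((other) => `<a href="https://${domain}/${slug(other)}">${other}</a>`)
      .join(" ");
    const stuffed = Array(20).fill(kw).join(", "); // crude keyword stuffing
    const html = `<!doctype html>
<html><head>
  <title>${kw} - Breaking News</title>
  <meta name="keywords" content="${stuffed}">
</head><body>
  <h1>${kw}</h1>
  <p>${stuffed}</p>
  <nav>${siblingLinks}</nav>
</body></html>`;
    return { path: `/${slug(kw)}`, html };
  });
}
```

Pages like these offer nothing to a human reader; their sole function is to saturate ranking signals until one of them surfaces in a victim’s feed.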
Once a user clicks on one of these seemingly legitimate news stories, they are redirected to actor-controlled domains. These domains are meticulously crafted to host AI-generated content, designed to mimic genuine news outlets, further lending credibility to the fraudulent narratives. This AI-generated content often features alarming headlines or sensationalized stories, such as "Your device is infected!" or "Legal action pending for copyright infringement," designed to capture immediate attention and bypass initial scrutiny. The sophistication of modern generative AI allows for the creation of grammatically correct and contextually relevant, albeit fabricated, articles that can easily deceive an unwary reader.
Upon landing on one of these sites, users are coerced into enabling push notifications. This coercion often takes the form of pop-up prompts that mimic legitimate requests for notification access, or of social engineering that frames notifications as necessary to continue reading the article, to verify identity, or to receive "important updates." Once notifications are enabled, the malicious actors gain a persistent channel to deliver a barrage of fake legal threats, urgent warnings, and financial scams directly to the user’s device, bypassing traditional email and messaging spam filters. These scareware notifications are engineered to create panic and urgency, prompting users to click without critical thought. Clicking them invariably redirects users to additional sites operated by the threat actors. These redirects serve a dual purpose: they drive seemingly organic traffic to embedded advertisements, generating fraudulent revenue for the attackers, and they expose the user to further financial scams or malware downloads.
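The notification channel itself is built on ordinary, documented browser APIs; the deception lies entirely in how permission is obtained. The following minimal sketch traces the standard Web Push flow a page of this kind would drive; the service worker path and server key are placeholders, not artifacts from the campaign.

```typescript
// Minimal sketch of the standard Web Push flow that scareware pages abuse.
// Notification, service workers, and PushManager are ordinary browser APIs
// available to any site the user visits; the fake "Allow to continue"
// overlay is the social engineering layer on top of them.

async function requestPersistentChannel(): Promise<void> {
  // Scam pages typically show a fake "Click Allow to keep reading" banner
  // before triggering the real browser permission prompt below.
  const permission = await Notification.requestPermission();
  if (permission !== "granted") return;

  // A push subscription lives in the browser profile, not the open tab,
  // giving the site a delivery channel that persists after the page closes.
  const registration = await navigator.serviceWorker.register("/sw.js");
  await registration.pushManager.subscribe({
    userVisibleOnly: true,
    // base64url-encoded VAPID public key identifying the push server --
    // in this scheme, an actor-controlled one. Placeholder value only.
    applicationServerKey: "BASE64URL_VAPID_PUBLIC_KEY_PLACEHOLDER",
  });
}
```

Because the subscription outlives the tab, simply closing the scam page does not sever the channel; the user must explicitly revoke the site’s notification permission in browser settings.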

Gavin Reid, Chief Information Security Officer at HUMAN, emphasized the gravity of the situation, stating, "The findings demonstrate how threat actors abuse AI to hijack trusted discovery surfaces and turn them into delivery vehicles for scareware, deepfakes, and financial fraud." This statement highlights a concerning trend where advanced AI capabilities, originally developed for legitimate purposes, are being weaponized by cybercriminals to amplify the reach and effectiveness of their illicit operations. The mention of "deepfakes" suggests that the capabilities for deception could extend beyond text to manipulated images or videos in future iterations of such schemes.
A Chronology of Evolving Threats: Push Notifications as a Weapon
The use of push notifications as a vector for malicious activities is not a new phenomenon, but Pushpaganda represents a significant evolution in its sophistication. Malware-based threats involving push notifications, for both web and mobile platforms, have been a recurring challenge for cybersecurity professionals. Lindsay Kaye, Vice President of Threat Intelligence at HUMAN Security, explained the psychological leverage employed by these campaigns: "In many cases, users are quick to click, either to make them go away or to get more information, making them an effective tool in a malware author’s arsenal." The inherent sense of urgency created by these notifications often overrides a user’s natural caution, making them highly susceptible to manipulation.
In September 2025, Infoblox shed light on a prominent threat actor known as Vane Viper, which had engaged in systematic push notification abuse to serve ads and facilitate ClickFix-style social engineering campaigns. This precedent demonstrates a clear, albeit evolving, lineage of attacks that leverage the persistent nature of push notifications. While Vane Viper focused more on direct ad serving and social engineering, Pushpaganda integrates AI-generated content and SEO poisoning to achieve a broader initial reach and a more convincing deception, pushing the boundaries of these attack vectors.
The broader landscape of ad fraud, within which Pushpaganda operates, is vast and continuously adapting. Just over a month prior to the disclosure of Pushpaganda, HUMAN identified a separate, massive ad fraud laundering marketplace, codenamed Low5. This operation involved more than 3,000 domains designed to serve as cashout sites for sophisticated fraud schemes, including the infamous BADBOX 2.0 botnet, along with 63 Android apps. Low5 was described as one of the largest ad fraud laundering marketplaces ever uncovered, peaking at approximately 2 billion bid requests a day and potentially operating on as many as 40 million devices worldwide. The apps associated with Low5 contained code instructing user devices to visit scheme-connected domains and click on ads, generating fraudulent impressions and clicks.
The existence of such "cashout sites" or "ghost sites" is crucial to understanding the economics of ad fraud. These sites conduct content-driven fraud by selling ad space to advertisers who are under the impression that their ads will be viewed by genuine human users. In reality, these views and clicks are generated by bots or, in the case of Pushpaganda, by real users tricked into visiting these sites. The Android apps linked to Low5 have since been removed from the Google Play Store, but the underlying infrastructure often remains resilient. HUMAN’s analysis of Low5 underscored a critical challenge: "A shared monetization layer spanning more than 3,000 domains allows multiple threat actors to plug into the same infrastructure, creating a distributed laundering system that increases threat resilience, complicates attribution, and enables rapid replication." This distributed model makes it incredibly difficult for security teams to dismantle these operations entirely, as taking down one component does not necessarily incapacitate the entire network.
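One practical way researchers hunt for such a shared monetization layer is to look for authorized-seller identifiers that recur across otherwise unrelated sites, for example by crawling ads.txt files at scale. The sketch below illustrates the idea only; the data shapes and threshold are assumptions for this article, not HUMAN’s actual methodology.

```typescript
// Illustrative infrastructure-clustering sketch: a seller ID (e.g. from
// crawled ads.txt files) that is authorized across many unrelated domains
// can indicate a shared monetization layer of the kind described for Low5.
// Data shapes and the threshold are assumptions, not a vendor methodology.

interface AdsTxtEntry {
  domain: string;   // site whose ads.txt was crawled
  sellerId: string; // publisher account authorized to sell its inventory
}

function findSharedSellers(
  entries: AdsTxtEntry[],
  minDomains = 50,
): Map<string, string[]> {
  // Group the domains that authorize each seller ID.
  const domainsBySeller = new Map<string, Set<string>>();
  for (const { domain, sellerId } of entries) {
    if (!domainsBySeller.has(sellerId)) domainsBySeller.set(sellerId, new Set());
    domainsBySeller.get(sellerId)!.add(domain.toLowerCase());
  }

  // Keep only seller IDs spanning suspiciously many domains.
  const shared = new Map<string, string[]>();
  for (const [sellerId, domains] of domainsBySeller) {
    if (domains.size >= minDomains) shared.set(sellerId, [...domains].sort());
  }
  return shared;
}
```

A single account authorized to monetize thousands of low-quality domains at once is precisely the "plug into the same infrastructure" pattern the Low5 analysis describes.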
Google’s Proactive Response and Policy Enforcement
Upon learning of the Pushpaganda campaign, Google responded swiftly, confirming that it had rolled out a fix addressing the specific spam issue in question even prior to the public disclosure of HUMAN’s report. A Google spokesperson emphasized the company’s continuous efforts to combat such threats, stating, "We keep the vast majority of spam out of Discover through robust spam-fighting systems and policies against emerging forms of low quality, manipulative content." This proactive stance highlights the ongoing arms race between platform security teams and cybercriminals.
Google further detailed its comprehensive strategy to maintain the integrity of its platforms. The company has instituted robust spam policies and employs sophisticated spam-fighting systems designed to tackle abusive practices that lead to unoriginal, low-quality content surfacing in both Search and Discover. To adapt to the evolving tactics of threat actors, Google regularly rolls out algorithmic updates specifically designed to flag policy-violating content that attempts to manipulate Search and News rankings. These updates are crucial for keeping pace with the dynamic techniques employed by fraudsters.
Regarding the specific issue of AI-generated content, Google’s guidance is explicit: any use of AI to generate content primarily to manipulate search rankings is a direct violation of its spam policies. This includes instances of "scaled content abuse," which encompasses using generative AI tools or similar offerings to produce pages that offer no genuine value for users; scraping content from various sources (feeds, search results, etc.) without adding substantial value; and creating multiple sites with the intent of concealing the scaled nature of the content production. These policies aim to ensure that Google’s platforms remain a source of valuable, original information, rather than being flooded with AI-generated garbage designed purely for manipulation.
Google’s response, added in an April 15, 2026 update to the story, underscores the dynamic nature of these cyber threats and the collaborative effort required between cybersecurity researchers and platform providers to mitigate them effectively. It also highlights the company’s commitment to maintaining a high bar for content quality across its discovery services.
Broader Impact and Implications for the Digital Ecosystem
The Pushpaganda operation, alongside its predecessors like Vane Viper and contemporaries like Low5, illuminates several critical implications for the digital ecosystem, users, and the cybersecurity industry.
Erosion of Trust in Digital Platforms: Perhaps the most significant long-term impact is the erosion of user trust in legitimate online content and discovery platforms. When users encounter deceptive news stories, even those generated by AI, within trusted environments like Google Discover, their ability to discern truth from falsehood is compromised. This not only makes them more susceptible to future scams but also fosters a general skepticism towards all online information, including genuine news. For platforms like Google, maintaining user trust is paramount, making these ad fraud schemes a direct threat to their core business model and reputation.
Financial Drain on Advertisers: Advertisers are also direct victims of these schemes. They pay for impressions and clicks that are either generated by bots or by users who have been tricked into visiting fraudulent sites, meaning their advertising budgets are wasted on non-converting, invalid traffic. This financial drain can be substantial, leading to decreased ROI for advertising campaigns and a general reluctance to invest heavily in programmatic advertising if fraud remains rampant. The need for robust ad verification and fraud detection services becomes even more critical in this environment. Estimates suggest that ad fraud costs advertisers billions of dollars annually, and schemes like Pushpaganda contribute significantly to this global problem.
The Escalating AI Arms Race: The weaponization of AI by threat actors, as seen in Pushpaganda’s use of AI-generated content, signifies a new frontier in cyber warfare. AI’s ability to create convincing text, images, and even deepfakes at scale makes it harder for automated detection systems to differentiate between legitimate and fraudulent content. This necessitates the development of even more advanced AI-driven defenses capable of identifying subtle anomalies and patterns indicative of malicious AI usage. The cybersecurity industry finds itself in an escalating "AI arms race," where both offensive and defensive capabilities are continually evolving. Researchers are now exploring techniques like AI watermarking and anomaly detection to counter this trend.
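To illustrate what even the simplest such defense looks like, the heuristic below flags a site whose sampled articles share an unusually high fraction of word n-grams, one crude fingerprint of templated, mass-produced content. This is an illustrative sketch, not HUMAN’s or Google’s detection logic, and the threshold is an assumption that would need tuning against known-good publishers.

```typescript
// Illustrative heuristic only -- not HUMAN's or Google's detection logic.
// Flags a site whose sampled articles share an unusually high fraction of
// word 5-grams, one crude fingerprint of templated, mass-produced content.

function ngrams(text: string, n = 5): Set<string> {
  const words = text.toLowerCase().split(/\s+/).filter(Boolean);
  const grams = new Set<string>();
  for (let i = 0; i + n <= words.length; i++) {
    grams.add(words.slice(i, i + n).join(" "));
  }
  return grams;
}

// Jaccard similarity between two articles' n-gram sets.
function similarity(a: string, b: string): number {
  const ga = ngrams(a);
  const gb = ngrams(b);
  let shared = 0;
  for (const g of ga) if (gb.has(g)) shared++;
  const union = ga.size + gb.size - shared;
  return union === 0 ? 0 : shared / union;
}

// Flag a site if the average pairwise similarity of its sampled articles
// exceeds a threshold (an assumed value; a real system would tune it
// against known-good news publishers).
function looksTemplated(articles: string[], threshold = 0.35): boolean {
  let total = 0;
  let pairs = 0;
  for (let i = 0; i < articles.length; i++) {
    for (let j = i + 1; j < articles.length; j++) {
      total += similarity(articles[i], articles[j]);
      pairs++;
    }
  }
  return pairs > 0 && total / pairs > threshold;
}
```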
Critical Need for User Vigilance and Education: For the average internet user, Pushpaganda serves as a stark reminder of the constant need for vigilance. The ease with which users can be coerced into enabling push notifications, coupled with the convincing nature of AI-generated content, highlights a gap in digital literacy. Educational initiatives focusing on identifying suspicious notifications, scrutinizing the source of news, and understanding the risks associated with granting browser permissions are more crucial than ever. Users must be empowered with the knowledge to protect themselves from these increasingly sophisticated social engineering tactics. Simple practices like checking URLs, looking for grammatical errors in content, and being wary of sensational headlines can go a long way.
Resilience of Fraud Infrastructure and Attribution Challenges: The insights from the Low5 operation – particularly the concept of a shared monetization layer and the resilience of cashout domains – reveal a fundamental challenge in combating large-scale ad fraud. Even if specific campaigns or malicious apps are shut down, the underlying infrastructure can be quickly repurposed by other threat actors. This "plug-and-play" nature of fraud infrastructure means that remediation efforts must be continuous and aggressive, focusing not just on individual campaigns but on dismantling the broader networks that support them. As HUMAN stated, "Low5 reinforces the need for continuous, aggressive threat intelligence and detection expertise to hunt down cashout domains and flag them pre-bid." This emphasizes a shift from reactive measures to proactive threat hunting and intelligence sharing within the industry. Attribution also remains a significant hurdle, as threat actors often employ layers of obfuscation to hide their identities and locations.
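In programmatic advertising terms, flagging a domain "pre-bid" means consulting threat intelligence before any money changes hands: the buyer’s bidder simply declines to bid on flagged inventory. The sketch below is a minimal illustration under assumed, simplified data shapes, not any exchange’s or vendor’s real API.

```typescript
// Minimal pre-bid filtering sketch. The BidRequest shape and in-memory
// indicator list are simplifying assumptions, not a real vendor API.

interface BidRequest {
  domain: string;    // site claiming to offer the ad inventory
  bundleId?: string; // app identifier, for in-app inventory
}

class PreBidFilter {
  private blockedDomains = new Set<string>();
  private blockedBundles = new Set<string>();

  // Ingest a threat-intel feed of known cashout domains and apps,
  // e.g. the kind of indicator list published alongside Low5 research.
  loadIndicators(domains: string[], bundles: string[]): void {
    for (const d of domains) this.blockedDomains.add(d.toLowerCase());
    for (const b of bundles) this.blockedBundles.add(b);
  }

  // Called in the bid path: refuse flagged inventory so no advertiser
  // spend ever reaches the laundering layer.
  shouldBid(req: BidRequest): boolean {
    if (this.blockedDomains.has(req.domain.toLowerCase())) return false;
    if (req.bundleId && this.blockedBundles.has(req.bundleId)) return false;
    return true;
  }
}
```

The appeal of pre-bid placement is economic: blocking before the auction starves the laundering layer of revenue, rather than merely documenting the fraud after advertiser budgets have already been spent.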
Regulatory and Policy Implications: The increasing sophistication and global reach of ad fraud, particularly with AI involvement, could lead to greater regulatory scrutiny. Governments and international bodies may consider stricter regulations on digital advertising platforms, content moderation, and the use of AI, especially concerning transparency and accountability. The pressure on tech giants to invest more heavily in security and content integrity will likely intensify, potentially leading to new compliance requirements and industry standards.
Conclusion
The unmasking of Pushpaganda by HUMAN Security underscores the evolving and increasingly complex nature of ad fraud. By seamlessly integrating SEO poisoning, AI-generated content, and the persuasive power of push notifications, threat actors are devising multi-layered schemes that are challenging to detect and dismantle. While platforms like Google are actively deploying fixes and enhancing their spam-fighting systems, the resilience of fraud infrastructure and the constant innovation by cybercriminals necessitate a continuous, collaborative effort between security researchers, platform providers, and regulatory bodies. For users, heightened awareness, critical engagement with online content, and a healthy skepticism toward unsolicited notifications are essential safeguards against falling victim to these pervasive and financially damaging schemes. The digital landscape remains a dynamic battleground where intelligence, technology, and vigilance are the primary weapons against those seeking to exploit trust for illicit gain.