Last year, Princeton researchers revealed a powerful new ad-blocking technique: perceptual ad-blocking, which uses a machine-learning model, trained on images of pages in which the ads have been labeled, to predict which elements of a page are ads to be blocked and which are legitimate content.
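Here's a minimal sketch of what that decision loop might look like, assuming a hypothetical binary classifier that scores a screenshot of each rendered page element so the blocker can hide anything scored as an ad; the model, threshold, and element representation are illustrative, not the Princeton implementation:

```python
# A hypothetical perceptual ad-blocker decision loop: a binary image
# classifier scores a screenshot of each rendered page element, and the
# blocker hides the elements it scores as ads. The model, threshold, and
# element representation are illustrative assumptions, not the Princeton code.
import torch
import torch.nn as nn

class AdClassifier(nn.Module):
    """Tiny CNN standing in for a model trained on labeled page screenshots."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))  # logit: ad vs. not-ad

def elements_to_block(elements, model, threshold=0.5):
    """elements: list of (element_id, HxWx3 uint8 screenshot); returns ids to hide."""
    model.eval()
    to_hide = []
    with torch.no_grad():
        for element_id, screenshot in elements:
            x = screenshot.permute(2, 0, 1).unsqueeze(0).float() / 255.0
            if torch.sigmoid(model(x)).item() > threshold:
                to_hide.append(element_id)
    return to_hide

# Usage with dummy data: two 64x64 "screenshots" of page elements.
model = AdClassifier()
elements = [("banner-1", torch.randint(0, 256, (64, 64, 3), dtype=torch.uint8)),
            ("article-body", torch.randint(0, 256, (64, 64, 3), dtype=torch.uint8))]
print(elements_to_block(elements, model))
```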
However, a new paper from a group of Stanford and CISPA Helmholtz Center researchers describes a powerful machine-learning countermeasure that, they say, will permanently tilt the advantage toward advertisers and away from ad-blockers.
The team revealed a set of eight techniques for generating adversarial examples: slightly modified ads that completely flummox the perceptual ad-blocker's model, ranging from overlaying a transparent image on the ad to modifying a few pixels in the logo used to demarcate it.
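One flavor of such an attack can be sketched as a standard gradient-based perturbation (an FGSM-style step); the snippet below is illustrative only, using a hypothetical stand-in linear model rather than any real ad-blocker's classifier, and is not the authors' code:

```python
# An illustrative evasion in the spirit of the attacks above: a single
# FGSM-style step nudges the ad image so a classifier's "ad" score falls,
# while keeping every pixel within a small, imperceptible budget (epsilon).
# The linear stand-in model below is hypothetical; this is not the authors' code.
import torch
import torch.nn.functional as F

def evade_blocker(model, ad_image, epsilon=2.0 / 255):
    """ad_image: (3, H, W) float tensor in [0, 1]; returns a perturbed copy."""
    x = ad_image.unsqueeze(0).clone().requires_grad_(True)
    logit = model(x)
    # Push the prediction toward "not an ad" (label 0).
    loss = F.binary_cross_entropy_with_logits(logit, torch.zeros_like(logit))
    loss.backward()
    # One signed-gradient descent step, clamped so the result stays a valid image.
    perturbed = (x - epsilon * x.grad.sign()).clamp(0.0, 1.0)
    return perturbed.squeeze(0).detach()

# Stand-in "perceptual" classifier and a random 64x64 "ad" image.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 1))
ad = torch.rand(3, 64, 64)
before = torch.sigmoid(model(ad.unsqueeze(0))).item()
after = torch.sigmoid(model(evade_blocker(model, ad).unsqueeze(0))).item()
print(f"ad score before: {before:.3f}, after: {after:.3f}")
```

The same gradient machinery can be pointed the other way, toward the false-positive attacks described next, by targeting the "ad" label on an image of legitimate content instead.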
What's more, the team showed that they could cause the perceptual blocker's model to erroneously identify a page's actual content as an ad and block it, while leaving the ads unblocked.
The team argues that attackers using these techniques will always outpace perceptual-blocking models' ability to detect them, suggesting that perceptual blocking may be a dead letter.
We note that detection of adversarial examples [27, 47]—a simpler problem in principle but also one far from solved [14]—may not be applicable to ad-blockers. Indeed, ad-blockers face both adversarial false-positives and false-negatives, so merely detecting a perturbation does not help in decision-making. This challenging threat model also applies in part to ad-blockers based on non-visual cues, e.g., ML-based ad-blockers that use similar features as filter lists [11, 29, 36]. None of these have yet been evaluated against adaptive adversaries.
Moreover, by virtue of not relying on visual cues, these models are presumably easier to attack in ways that are fully transparent to users (e.g., switching ad domains).
Ad-versarial: Defeating Perceptual Ad-Blocking [Florian Tramèr, Pascal Dupré, Gili Rusak, Giancarlo Pellegrino and Dan Boneh/Arxiv]
Researchers Defeat Most Powerful Ad Blockers, Declare a 'New Arms Race' [Daniel Oberhaus/Motherboard]
(via /.)