A new research paper written by a team of academics and computer scientists from Spain and Austria has demonstrated that it’s possible to use Facebook’s targeting tools to deliver an ad exclusively to a single individual if you know enough about the interests Facebook’s platform assigns them.
The paper — entitled Unique on Facebook: Formulation and Evidence of (Nano)targeting Individual Users with non-PII Data — describes a “data-driven model” that defines a metric showing the probability a Facebook user can be uniquely identified based on interests attached to them by the ad platform.
The researchers demonstrate that they were able to use Facebook’s Custom Audience tool to target a number of ads in such a way that each ad only reached a single, intended Facebook user.
The research raises fresh questions about potentially harmful uses of Facebook’s ad targeting tools and — more broadly — about the legality of the tech giant’s personal data processing empire, given that the information it collects on people can be used to uniquely identify individuals, picking them out of the crowd on its platform purely on the basis of their interests.
The findings could increase pressure on lawmakers to ban or phase out behavioral advertising — which has been under attack for years, over concerns it poses a smorgasbord of individual and societal harms. And, at the least, the paper seems likely to drive calls for robust checks and balances on how such invasive tools can be used.
The findings also underscore the importance of independent research being able to interrogate algorithmic adtech — and should increase pressure on platforms not to close down researchers’ access.
Interests on Facebook are personal data
“The results from our model reveal that the 4 rarest interests or 22 random interests from the interests set FB [Facebook] assigns to a user make them unique on FB with a 90% probability,” write the researchers from Madrid’s University Carlos III, the Graz University of Technology in Austria and the Spanish IT company, GTD System & Software Engineering, detailing one key finding — that having a rare interest or lots of interests that Facebook knows about can make you easily identifiable on its platform, even among a sea of billions of other users.
“In this paper, we present, to the best of our knowledge, the first study that addresses individuals’ uniqueness considering a user base at the worldwide population’s order of magnitude,” they go on, referring to the scale inherent in Facebook’s data mining of its 2.8BN+ active users (NB: the company also processes information about non-users, meaning its reach scales to even more Internet users than are active on Facebook).
The researchers suggest the paper presents the first evidence of “the possibility of systematically exploiting the FB advertising platform to implement nanotargeting based on non-PII [interest-based] data”.
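The intuition behind the researchers’ uniqueness metric can be sketched with a back-of-the-envelope calculation (a simplified illustration of ours, not the paper’s actual data-driven model): if interests were statistically independent, the expected number of other users sharing all of a person’s known interests would be the population size multiplied by each interest’s prevalence, and a Poisson approximation then gives the probability that nobody else matches.

```python
import math

def uniqueness_probability(interest_freqs, population=2_800_000_000):
    """Estimate the probability that a user is unique, given the
    fraction of the population holding each of their interests.

    Simplifying (and generous) assumption: interests are
    statistically independent across users.
    """
    # Expected number of *other* users sharing every interest.
    expected_matches = population
    for p in interest_freqs:
        expected_matches *= p
    # Poisson approximation: P(no other user matches) = exp(-lambda).
    return math.exp(-expected_matches)

# A handful of rare interests (each held by ~1 in 10,000 users)
# already drives the expected match count far below one.
print(uniqueness_probability([1e-4] * 4))   # ≈ 1.0 (almost certainly unique)

# Many moderately common interests have the same cumulative effect.
print(uniqueness_probability([0.05] * 22))  # ≈ 1.0
```

In reality interests are correlated (fans of one band tend to share others), which is precisely why the paper fits an empirical model to interest data from real users rather than relying on an independence assumption like the one above.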
There have been earlier controversies over Facebook’s ad platform being a conduit for one-to-one manipulation — such as this 2019 Daily Dot article about a company called the Spinner which was selling a ‘service’ to sex-frustrated husbands to target psychologically manipulative messages at their wives and girlfriends. The suggestive, subliminally manipulative ads would pop up on the targets’ Facebook and Instagram feeds.
The research paper also references an incident in UK political life, back in 2017, when Labour Party campaign chiefs apparently successfully used Facebook’s Custom Audience ad-targeting tool to ‘pull the wool’ over former leader Jeremy Corbyn’s eyes. But in that case the targeting was not aimed solely at Corbyn; it also reached his associates and a few aligned journalists.
With this research the team demonstrates it’s possible to use Facebook’s Custom Audience tool to target ads at just one Facebook user — a process they’re referring to as “nanotargeting” (vs the current adtech ‘standard’ of microtargeting ‘interest-based’ advertising at groups of users).
“We run an experiment through 21 Facebook ad campaigns that target three of the authors of this paper to prove that, if an advertiser knows enough interests from a user, the Facebook Advertising Platform can be systematically exploited to deliver ads exclusively to a specific user,” they write, adding that the paper provides “the first empirical evidence” that one-to-one/nanotargeting can be “systematically implemented on FB by just knowing a random set of interests of the targeted user”.
The interest data they used for their analysis was collected from 2,390 Facebook users via a browser extension the researchers created, which those users had installed before January 2017.
The extension, called Data Valuation Tool for Facebook Users, parsed each user’s Facebook ad preferences page to gather the interests assigned to them, as well as providing a real-time estimate about the revenue they generate for Facebook based on the ads they receive while browsing the platform.
While the interest data was gathered before 2017, the researchers’ experiments testing whether one-to-one targeting is possible through Facebook’s ad platform took place last year.
“In particular, we have configured nanotargeting ad campaigns targeting three authors of this paper,” they explain, discussing the results of their tests. “We tested the results of our data-driven model by creating tailored audiences for each targeted author using combinations of 5, 7, 9, 12, 18, 20, and 22 randomly selected interests from the list of interests FB had assigned them.
“In total, we ran 21 ad campaigns between October and November 2020 to demonstrate that nanotargeting is feasible today. Our experiment validates the results of our model, showing that if an attacker knows 18+ random interests from a user, they will be able to nanotarget them with a very high probability. In particular, 8 out of the 9 ad campaigns that used 18+ interests in our experiment successfully nanotargeted the chosen user.”
So having 18 or more Facebook interests just got really interesting to anyone who wants to manipulate you.
Nothing to stop nanotargeting
One way to prevent one-to-one targeting would be for Facebook to put a robust limit on the minimum audience size.
Per the paper, the adtech giant provides a “Potential Reach” value to advertisers using its Ads Campaign Manager tool if the potential audience size for a campaign is greater than 1,000 (or greater than 20, prior to 2018 when Facebook increased the limit).
However, the researchers found that Facebook does not actually prevent advertisers from running a campaign targeting fewer users than those potential reach limits — the platform just does not tell advertisers how many (or, well, how few) people their messaging will reach.
They were able to demonstrate this by running multiple campaigns that successfully targeted a single Facebook user — validating that the audience size for their ads was one in three ways: via data generated by Facebook’s ad reporting tools (“FB reported that only one user had been reached”); via a log record in their web server generated by the (sole) user clicking on the ad; and by asking each nanotargeted user to capture a snapshot of the ad and its associated “Why am I seeing this ad?” explanation, which they say matched their targeting parameters in the successfully nanotargeted cases.
“The main conclusions derived from our experiment are the following: (i) nanotargeting a user on FB is highly likely if an attacker can infer 18+ interests from the targeted user; (ii) nanotargeting is extremely cheap, and (iii) based on our experiments, 2/3 of the nanotargeted ads are expected to be delivered to the targeted user in less than 7 effective campaign hours,” they add in a summary of the results.
In another section of the paper discussing countermeasures to prevent nanotargeting, the researchers argue that Facebook’s claimed limits on audience size “have been proven to be completely ineffective” — and assert that the tech giant’s limit of 20 is “not currently being applied”.
They also suggest there are workarounds for the limit of 100 that Facebook claims it applies to Custom Audiences (another targeting tool that involves advertisers uploading PII).
From the paper:
“The most important countermeasure Facebook implements to prevent advertisers from targeting very narrow audiences are the limits imposed on the minimum number of users that can form an audience. However, those limits have been proven to be completely ineffective. On the one hand, Korolova et al. state that, motivated by the results of their paper, Facebook disallowed configuring audiences of size smaller than 20 using the Ads Campaign Manager. Our research shows that this limit is not currently being applied. On the other hand, FB enforces a minimum Custom Audience size of 100 users. As presented in Section 7.2.2, several works in the literature showed different ways to overcome this limit and implement nanotargeting ad campaigns using Custom Audiences.”
While the researchers refer throughout their paper to interest-based data as “non-PII” (i.e. not personally identifiable information), it is important to note that that framing is meaningless in a European legal context — where the law, under the EU’s General Data Protection Regulation (GDPR), takes a much broader view of personal data.
PII is a more common term in the US — which does not have comprehensive (federal) privacy legislation equivalent to the pan-EU GDPR.
Adtech companies also typically prefer to talk in terms of PII, given it’s a far narrower category than all the information they actually process that can be used to identify and profile individuals in order to target them with ads.
Under the GDPR, personal data does not only include the obvious identifiers, like a person’s name or email address (aka ‘PII’), but can also encompass information that can be used — indirectly — to identify an individual, such as a person’s location or indeed their interests.
Here’s the relevant chunk from the GDPR (Article 4(1)) [emphasis ours]:
“‘personal data’ means any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person;”
Other research has also repeatedly — over decades — shown that re-identification of individuals is possible with, at times, just a handful of pieces of ‘non-PII’ information, such as credit card metadata or Netflix viewing habits.
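The counting idea behind those re-identification results can be illustrated with a toy sketch (entirely made-up data, not drawn from any of the cited studies): group records by their combination of quasi-identifying attributes and measure each record’s “anonymity set” — any record whose set has size one is unique in the dataset, and hence re-identifiable.

```python
from collections import Counter

# Toy records of 'non-PII' attributes: (zip code, gender, birth year).
records = [
    ("10001", "F", 1985),
    ("10001", "F", 1985),
    ("10001", "M", 1990),
    ("94103", "F", 1985),
    ("94103", "M", 1962),
]

def anonymity_set_sizes(rows):
    """For each record, count how many records (including itself)
    share the exact same attribute combination."""
    counts = Counter(rows)
    return [counts[row] for row in rows]

sizes = anonymity_set_sizes(records)
unique_fraction = sizes.count(1) / len(records)
print(sizes)            # [2, 2, 1, 1, 1]
print(unique_fraction)  # 0.6 — three of five records are unique
```

Even in this tiny example, three attributes leave most records unique; in real datasets the uniqueness fraction climbs rapidly as attributes (or interests) are combined, which is the effect the Facebook study quantifies at population scale.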
So it should not surprise us that Facebook’s vast people profiling, ad targeting empire, which continuously and pervasively mines Internet users’ activity for interest-based signals (aka, personal data) to profile individuals for the purpose of targeting them with ‘relevant’ ads, has created a new attack vector for — potentially — manipulating almost anyone in the world if you know enough about them (and they have a Facebook account).
But that does not mean there are no legal problems here.
Indeed, the legal basis that Facebook claims for processing people’s personal data for ad targeting has been under challenge in the EU for years.
Legal basis for ad targeting
The tech giant used to claim that users consent to their personal data being used for ad targeting. However it does not offer a free, specific and informed choice to people over whether they want to be profiled for behavioral ads or just want to connect with their friends and family. (And free, specific and informed is the GDPR standard for consent.)
If you want to use Facebook you have to accept your information being used for ad targeting. This is what EU privacy campaigners have dubbed ‘forced consent’. Aka, coercion, not consent.
However, since the GDPR came into application (back in May 2018), Facebook has — seemingly — switched to claiming it’s legally able to process Europeans’ information for ads because users are actually in a contract with it to receive ads.
A preliminary decision by Facebook’s lead EU regulator, Ireland’s Data Protection Commission (DPC), which was published earlier this week, has proposed to fine the company $36M for not being transparent enough about that silent switch.
And while the DPC doesn’t seem to have a problem with Facebook’s ad contract claim, other European regulators disagree — and are likely to object to Ireland’s draft decision — so the regulatory scrutiny over that particular Facebook GDPR complaint is ongoing and far from over.
If the tech giant is ultimately found to be bypassing EU law it could finally be forced to give users a free choice over whether their information can be used for ad targeting — which would essentially blast an existential hole in its ad targeting empire, since even holding a few pieces of interest data is personal data, as this research underlines.
For now, though, the tech giant is using its customary tactic of denying there’s anything to see here.
In a statement responding to the research, a Facebook spokesperson dismissed the paper — claiming it is “wrong about how our ad system works”.
Facebook’s statement goes on to try to divert attention from the researchers’ core conclusions in an effort to minimize the significance of their findings — with its spokesperson writing:
“This research is wrong about how our ad system works. The list of ads targeting interests we associate with a person are not accessible to advertisers, unless that person chooses to share them. Without that information or specific details that identify the person who saw an ad, the researchers’ method would be useless to an advertiser attempting to break our rules.”
Responding to Facebook’s rebuttal, one of the paper’s authors — Angel Cuevas — described its argument as “unfortunate” — saying the company should be deploying stronger countermeasures to prevent the risk of nanotargeting, rather than trying to claim there is no problem.
In the paper the researchers identify a number of harmful risks they say could be associated with nanotargeting — such as psychological persuasion, user manipulation and blackmailing.
“It is surprising to find that Facebook is implicitly recognizing that nanotargeting is feasible and the only countermeasure is assuming advertisers are unable to infer users’ interests,” Cuevas told TechCrunch.
“There are many ways interests could be inferred by advertisers. We did that in our paper with a browser plug-in (with explicit consent from users for research purposes). Even more, beyond interests there are other parameters (we did not use in our research) such as age, gender, city, zip code, etc.
“We think this is an unfortunate argument. We believe a player like Facebook can implement stronger countermeasures than assuming advertisers are unable to infer user interests to be later used to define audiences in the Facebook ads platform.”
One might recall — for example — the 2018 Cambridge Analytica Facebook data misuse scandal, where a developer that had access to Facebook’s platform was able to extract data on millions of users, without most of the users’ knowledge or consent — via a quiz app.
So, as Cuevas says, it’s not hard to envisage similarly opaque and underhand tactics being deployed by advertisers/attackers/agents to harvest Facebook users’ interest data to try to manipulate specific individuals.
In the paper the researchers note that a few days after their nanotargeting experiment had ended Facebook shuttered the account they’d used to run the campaigns — without explanation.
The tech giant did not respond to specific questions we put to it about the research, including why it closed the account — and, if it did so because it had detected the nanotargeting issue, why it failed to prevent the ads running and targeting a single user in the first place.
What might the wider implications be for Facebook’s business as a result of this research?
One privacy researcher we spoke to suggested the research will certainly be useful for litigation — which is growing in Europe, given the slow pace of privacy enforcement by EU regulators against Facebook specifically (and adtech more generally).
Another pointed out that the findings underline how Facebook has the ability to “systematically re-identify” users at scale — while pretending it does not process ‘personal data’ — suggesting the tech giant has amassed enough data on enough people that it can, essentially, circumvent narrowly bounded legal restrictions that might seek to put limits on its processing of PII.
So regulators looking to put meaningful limits on harms that can flow from behavioral advertising will need to be wise to how Facebook’s own algorithms can seek out and make use of proxies in the masses of data it holds and attaches to users — and its likely line of associated argument that its processing therefore avoids any legal implications (a tactic Facebook has used on the issue of inferred sensitive interests, for example).
Another privacy watcher, Dr Lukasz Olejnik, an independent privacy researcher and consultant, called the research staggering — describing the paper as among the top ten most important privacy research results of this decade.
“Reaching 1 user out of 2.8bn? While the Facebook platform claimed there are precautions making such microtargeting impossible? So far, this is among the top 10 most important privacy research results in this decade,” he told TechCrunch.
“It seems that users are identifiable by their interests in the meaning of article 4(1) of the GDPR, meaning that interests constitute personal data. The only caveat is that we are not certain how such a processing would scale [given nanotargeting was only tested on three users].”
Olejnik said the research shows the targeting is based on personal data — and “perhaps even special category data in the meaning of GDPR Article 9”.
“This would mean that the user’s explicit consent is needed. Unless of course appropriate protections were made. But based on the paper we conclude that these, if present, are not sufficient,” he added.
Asked if he believes the research indicates a breach of the GDPR, Olejnik said: “DPAs should investigate. No question about it,” adding: “Even if the matter may be technically challenging, building a case should take two days max.”
We flagged the research to Facebook’s lead DPA in Europe, the Irish DPC — asking the privacy regulator whether it would investigate to determine if there had been a breach of the GDPR or not — but at the time of writing it had not responded.
Towards a ban on microtargeting?
On the question of whether the paper strengthens the case for outlawing microtargeting, Olejnik argues that curbing the practice “is the way forward” — but says the question now is how to do that.
“I don’t know if the current industry and political environment would be prepared for a total ban now. We should demand technical precautions, at the very least,” he said. “I mean, we were already told that these were in place but it appears this is not the case [in the case of nanotargeting on Facebook].”
Olejnik also suggested there could be changes coming down the pipe based on some of the ideas built into Google’s Privacy Sandbox proposal — which has, however, been stalled as a result of adtech complaints triggering competition scrutiny.
Asked for his views on a ban on microtargeting, Cuevas told us: “My personal position here is that we need to understand the tradeoff between privacy risks and economy (jobs, innovation, etc). Our research definitely shows that the adtech industry should understand that just thinking of PII information (email, phone, postal address, etc.) is not enough and they need to implement more strict measures regarding the way audiences can be defined.
“Saying that, we do not agree that microtargeting — understood as the capacity of defining an audience with (at least) tens of thousands of users — should be banned. There is a very important market behind microtargeting that creates many jobs and this is a very innovative sector that does interesting things that are not necessarily bad. Therefore, our position is limiting the potential of microtargeting to guarantee the privacy of the users.”
“In the area of privacy we believe the open question that is not solved yet is the consent,” he also said. “The research community and the adtech ecosystem have to work (ideally together) to create an efficient solution that obtains the informed consent from users.”
Zooming out, there are more legal requirements looming on the horizon for AI-driven tools in Europe.
Incoming EU legislation for high risk applications of artificial intelligence — which was proposed earlier this year — has suggested a total ban on AI systems that deploy “subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm”.
So it’s at least interesting to speculate whether Facebook’s platform might face a ban under the EU’s future AI Regulation — unless the company puts proper safeguards in place that robustly prevent the risk of its ad tools being used to blackmail or psychologically manipulate individual users.
For now, though, it’s lucrative business as usual for Facebook’s eyeball targeting empire.
Asked about plans for future research into the platform, Cuevas said the obvious next piece of work they want to do is to combine interests with other demographic information to see if nanotargeting is “even easier”.
“I mean, it is very likely that an advertiser could combine the age, gender, city (or zip code) of the user with a few interests to nanotarget a user,” he suggested. “We would like to understand how many of these parameters you need to combine. Inferring the gender, age, location and few interests from a user may be much easier than inferring few tens of interests.”
Cuevas added that the nanotargeting paper has been accepted for presentation at the ACM Internet Measurement Conference next month.