
Stalked by your digital doppelganger?

In this fourth instalment of EDRi’s facial recognition and fundamental rights series, we explore what could happen if facial recognition collides with data-hungry business models and 24/7 surveillance.

By EDRi · January 29, 2020

In the business of violating privacy

Online platforms, advertisers and data brokers already rely on amassing vast amounts of intimate user data, which they use to sell highly targeted adverts and to “personalise” services. This is not a by-product of what they do: data monetisation is the central purpose of the Facebooks and Googles of the world.

The Norwegian Consumer Council’s recent report, “Out of Control”, lays bare the extent of the damage caused by these exploitative and often unlawful ad-tech ecosystems. Many rights violations are exacerbated by data-driven business models, which are systemically opaque, intrusive and designed to grab as much personal data as possible. We’ve all heard the saying: “if the product is free, then you’re the product.”

Already, your interactions build an eerie (and often inaccurate and biased) digital portrait every time you send an email, swipe right or browse the web. And now, through the dystopian-sounding “BioMarketing”, advertisers can put cameras on billboards which instantly analyse your face as you walk by in order to predict your age, gender – and even your ethnicity or mood. They use this data to “personalise” the adverts that you see. So should you be worried that this real-time analysis could be combined with your online interactions? Is your digital doppelganger capable of stepping off your screen and into the street?

When your digital doppelganger turns against you

Airbnb’s discrimination against sex workers is just one example of how companies already use data from other platforms to unjustly deny services to users who, in accessing the service, have breached neither laws nor terms of service. As thousands of seemingly innocuous bits of data about you from your whole digital footprint – and now even your doorbell – are combined, inferences and predictions about your beliefs, likes, habits or identity can be easily used against you.

The recent Clearview AI scandal has cast light on a shady data company that unlawfully scraped and analysed billions of facial images from social media and other platforms, then sold access to the results to police departments. Clearview AI’s systems were sold on the basis of false claims of effectiveness, and deployed by law enforcement with flagrant disregard for data protection, security, safeguards, accuracy or privacy. The extent of online data-gathering and weaponisation may be even worse than we thought.

Surveillance tech companies are cashing in on this biometric recognition and data hype. Spain’s Herta Security and the Netherlands’ VisionLabs are just two of the many companies using the tired (and debunked) “security” excuse to justify scanning everyone in shops, stations or even just walking down the street. They sell real-time systems designed to exclude “bad” people by denying them access to spaces, and to reward “good” people with better deals. Worryingly, this privatises decisions that have a serious impact on fundamental rights, and enables private companies to act as judge, jury and executioner of our public spaces.

It gets worse…

Surveillance tech companies are predictably evasive about where they get their data and how they train their systems. Although both Herta Security and VisionLabs advertise stringent data protection compliance, their claims are highly questionable: Herta Security, for instance, proudly offers to target adverts based on skin colour. VisionLabs, meanwhile, say that their systems can identify “returning visitors”. It’s hard to see how they could do this without retaining personally identifiable biometric data – and retaining it without people’s consent would, of course, be a serious breach of data protection law.

As if this wasn’t enough, VisionLabs also enthusiastically offer to analyse the emotions of shoppers. This so-called “affect recognition” is becoming increasingly common, and rests on deeply dubious scientific and ethical foundations. But that hasn’t stopped it being used to assess everything from whether migrants are telling the truth in immigration interviews to whether someone is suitable for a job.

Aren’t we being a bit paranoid?

In theory, a collision of biometric analysis with vast data sources and 24/7 surveillance is terrifying. But would anyone really exploit your online and biometric data like this?

Thanks to facial recognition, many online platforms already know exactly what you look like, and covertly assign highly specific categories to your profile for advertising purposes. They know if you’re depressed or have a sexually transmitted disease. They know when you had your last period. They know if you are susceptible to impulse buys. They infer if you are lonely, or have low self-esteem.

There is evidence, too, that facial recognition systems at international borders have now been combined with predictions scraped from covert data sources in order to label travellers as potential terrorists or undocumented migrants. Combine this with the fact that automated systems consistently assess black people as more criminal than white people, even when all other variables are controlled for. Your digital doppelganger – inaccuracies, discriminatory judgements and all – becomes indelibly tied to your face and body. This will help law enforcement to identify, surveil, target and control even innocent people.

The violation of your fundamental rights

Given the huge impact that biometric identification systems have on our private lives, the question is not only how they work, but whether they should be allowed to. This data-driven perma-surveillance blurs the boundary between public and private control in dangerous ways, allowing public authorities to outsource responsibility to commercially protected algorithms, and enabling private companies to commodify people and sell the results back to law enforcement. This whole ecosystem fundamentally violates human dignity, which is essential to our ability to live in security and with respect for our private lives.

The ad-tech industry is a treasure trove for biometric surveillance tech companies, who can secretly purchase the knowledge, and therefore the power, to control your access to and interactions with streets, supermarkets and banks based on what your digital doppelganger says about you, whether true or not. You become a walking, tweeting advertising opportunity and a potential suspect in a criminal database. So the real question becomes: when will Europe put its foot down?


Read more in the Facial Recognition and Fundamental Rights series

10 reasons why online advertising is broken (08.01.2020)
https://medium.com/@ka.iwanska/10-reasons-why-online-advertising-is-broken-d152308f50ec

The EU is funding dystopian artificial intelligence projects (22.01.2020)
https://www.euractiv.com/section/digital/opinion/the-eu-is-funding-dystopian-artificial-intelligence-projects/

Facial Recognition Cameras Will Put Us All in an Identity Parade (27.01.2020)
https://www.theguardian.com/commentisfree/2020/jan/27/facial-recognition-cameras-technology-police

Out of Control: How consumers are exploited by the online advertising industry (14.01.2020)
https://fil.forbrukerradet.no/wp-content/uploads/2020/01/2020-01-14-out-of-control-final-version.pdf

Amazon’s Rekognition Shows Its True Colours (15.01.2020)
https://edri.org/amazons-rekognition-shows-its-true-colors/

We’re Banning Facial Recognition. We’re Missing the Point (20.01.2020)
https://www.nytimes.com/2020/01/20/opinion/facial-recognition-ban-privacy.html

The Secretive Company That Might End Privacy as We Know It (18.01.2020)
https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html

Adtech – the reform of real time bidding has started and will continue (17.01.2020)
https://ico.org.uk/about-the-ico/news-and-events/news-and-blogs/2020/01/blog-adtech-the-reform-of-real-time-bidding-has-started/

Privacy International study shows your mental health is for sale (03.09.2019)
https://privacyinternational.org/long-read/3194/privacy-international-study-shows-your-mental-health-sale

Contribution by Ella Jakubowska, EDRi intern [at time of writing, now Policy and Campaigns Officer], with many ideas gratefully received from or inspired by members of the EDRi network