Platform Regulation: Key takeaways from Haugen’s hearing
On 8 November 2021, Frances Haugen, the Facebook whistleblower, participated in a hearing of the European Parliament’s Internal Market and Consumer Protection Committee (IMCO). Her testimony offered extremely important insights into Facebook’s opaque operations, but it also revealed that her view of what Europe’s digital world should look like is shaped more by her expertise in data science than by public policy, and by her professional experience with Silicon Valley’s centralised mega-platforms.
Frances Haugen, the Facebook whistleblower, is the latest in a line of brave whistleblowers who have exposed corporate and government structures of technological oppression.
On 8 November 2021, Haugen participated in a hearing of the European Parliament’s Internal Market and Consumer Protection Committee (IMCO). The attention that hearing attracted was justified, as her insights into how Facebook’s algorithms are driven by greed and hurt users and our democracy are a real wake-up call for political action in the EU.
She repeatedly warned Members of the European Parliament that focussing on speedy and uncontrolled content removal is not going to solve the problems we face. The Digital Services Act (DSA), along with the Digital Markets Act and the ePrivacy Regulation, needs to tackle the systemic risks posed by mega-platforms such as Facebook. EDRi and its 45 member organisations fully agree with this message.
However, while her testimony brought extremely important insights into Facebook’s opaque operations, it also showed that Haugen’s thinking about what the digital world in Europe should look like is influenced by her expertise in data science rather than public policy, as well as by her professional experience working with Silicon Valley’s centralised mega-platforms.
Human-scale social media: What would that look like?
“I’m not asking you to give up social media, I’m asking you to enable human-scale social media,” was one of Haugen’s key points, and from the perspective of a healthy European digital sphere, we very much agree with her.
Surprisingly though, Haugen dismissed one of the few viable ways for us to achieve “human-scale social networking”: enabling people to decentralise their social interactions through mandatory interoperability for the largest platforms. Haugen instead seemed to support a model of centralised superpower where a few (hopefully) benevolent people dictate the terms under which the world’s population communicates – hardly a model Europe would want to aspire to.
In her speech, she raised the issue of content deletion across interoperable social networks, an issue that has in fact already been solved in similar contexts. The argument also falsely implies that Facebook users currently have a guarantee that their content will be purged from the internet when they demand it. In fact, there is ample evidence that Facebook simply keeps everything. Not to mention that every piece of content posted on a public social networking site can easily be copy-pasted, screenshotted and re-shared elsewhere by anyone anyway. No centralised system in the world can prevent that.
Haugen likewise dismissed the benefit of allowing third-party recommender systems on very large online platforms, although they can achieve exactly what she calls for: enabling users to optimise their news feeds and timelines for goals other than ‘engagement’. This can protect people from the amplification of content they may not wish to see, such as hate speech or disinformation.
Is surveillance advertising a problem?
Surveillance ads are what enables targeted manipulation and immense privacy breaches on Facebook (and across the internet). In her testimony, Frances Haugen’s proposals stayed in line with the current version of the DSA on algorithmic transparency, which falls short of the reform needed. Providing access to researchers will not by itself lead the platforms to fix the problems they created: their impact on marginalised groups and on democracies has been known for long enough. Furthermore, it is unlikely that most people would read ad transparency reports and have a meaningful chance to understand how their most intimate personal information is being used to micro-target them; and even if they did, there would not be much they could do with that information.
Consumer organisations and other experts rightly warn that this opaque micro-targeting system is not only dangerous in political ads, but also exacerbates the asymmetry of power between businesses and consumers. It increases consumer vulnerability and creates a dystopian society where “everything we do online – and increasingly offline – is registered and analysed in the service of making us purchase more products and services.”
To make matters worse, the majority of this data-fuelled ad tech ecosystem is controlled by big data firms like Facebook, which soak up advertising revenue that would otherwise finance content creators, publishers and quality journalism.
While Google’s and Facebook’s global ad revenues have surpassed US$ 200 billion annually, news publishers saw their ad revenue fall by 70 per cent between 2005 and 2018. No licensing agreement with Big Tech platforms can turn this trend around. What we need is to end the surveillance ad business, in which Facebook and Google will always dominate thanks to their privileged access to data. The DSA should put a tracking-free online advertising market back into the hands of those who provide high-quality ad space. Specifically, the DSA and the ePrivacy Regulation should lead to the phase-out of advertising based on personal data, including tracking and other inferred data. To ensure this, dark-pattern practices must be banned, and automated signals (such as Do Not Track) and other privacy-by-design and privacy-by-default features must be required of browsers, operating systems, hardware and apps to guarantee people’s security and privacy.