
Digital rights as a security objective: New gateways for attacks

Violations of human rights online, most notably of the right to data protection, pose a real threat to electoral security and can deepen societal polarisation.

By EDRi · December 19, 2018

In this series of blogposts, we explain how and why digital rights must be treated as a security objective in their own right. This second part of the series explains how encroaching on digital rights can create new gateways for attacks against our security.


In the first part of this series, we analysed the failure of the Council of the European Union to connect the obvious dots between ePrivacy and disinformation online, leaving a security vulnerability open by failing to protect citizens. However, failure to act is not the only front on which the EU is potentially weakening our security both on- and offline: on the contrary, some of the EU's more actively pursued digital policies could have unintended, yet serious, consequences in the future. Nowhere is this trend more visible than in the recent faith placed in filtering algorithms, the new "censorship machines" proposed as a solution for almost everything, from copyright infringement to terrorist content online.

Article 13 of the Copyright Directive proposal and the Terrorist Content Regulation proposal are two examples of the attempt to regulate the online world via algorithms. While they have different motivations, both share the logic of outsourcing the accountability for, and enforcement of, public rules to private entities, who then decide which speech remains available online. They, explicitly or implicitly, advocate for the introduction of technologies that detect and remove certain types of content: upload filters. They empower internet companies to decide which content stays online, based on their terms of service (and not the law). In a nutshell, public institutions are encouraging Google, Facebook and other platform giants to become the judge and the police of the internet. In doing so, they undermine the presumption that it should be democratically legitimised states, not private entities, that are tasked with the heavy burden of balancing the right to freedom of expression.

Even more chilling is the prospect of upload filters creating new entry points for forces that seek to influence societal debates in their favour. If algorithms become the judges of what can or cannot be published, they could become the target of the next wave of election interference campaigns, with attackers manipulating them into taking down critical or liberal voices in order to steer debates on the internet. Despite continuous warnings about the misuse of personal data on Facebook, it took only a few years to arrive at Cambridge Analytica. How long will it take to arrive at a similar point of election interference through upload filters on online platforms?
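To make this attack surface concrete, consider a deliberately naive moderation rule that automated takedown systems can end up approximating: remove content once it receives enough user reports. The Python sketch below is purely our own illustration, not any platform's actual system; the REPORT_THRESHOLD value and the Post class are invented for the example.

```python
# Hypothetical sketch: a naive "auto-takedown" rule that removes a post
# once it crosses a fixed number of user reports. A coordinated campaign
# can trivially weaponise such a rule against legitimate speech.

from dataclasses import dataclass

REPORT_THRESHOLD = 50  # invented value, purely illustrative


@dataclass
class Post:
    author: str
    text: str
    reports: int = 0
    removed: bool = False


def report(post: Post) -> None:
    """Register one user report; auto-remove once past the threshold."""
    post.reports += 1
    if post.reports >= REPORT_THRESHOLD:
        post.removed = True  # no human review, no legal balancing


# A coordinated campaign of 50 sock-puppet accounts silences a critical voice:
critical_post = Post(author="journalist", text="Investigation into state corruption")
for _ in range(REPORT_THRESHOLD):
    report(critical_post)

print(critical_post.removed)  # True - the filter did exactly what the attackers wanted
```

Once a removal decision is a predictable function of inputs that anyone can supply, attackers do not need to hack the platform at all; they only need to feed the filter.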

If we let this pre-emptive and extra-judicial censorship happen, it would likely cause severe detriment to European citizens' freedom of speech and right to information, stifling the free flow of information as a consequence. The societal effects of this could be further aggravated by the introduction of a press publishers' right (Article 11 of the Copyright Directive), which is vigorously opposed by the academic world, as it would concentrate the power over what appears in the news in ever fewer hands. Especially in Member States where media plurality and the independence of bigger outlets from state authorities are no longer guaranteed, a decline in societal resilience to authoritarian tendencies is unfortunately easy to imagine.

We have to be very clear about what machines are good at and what they are bad at: algorithms are remarkably well suited to detecting patterns and trends, but they cannot, and will not any time soon be able to, perform the delicate act of balancing our rights and freedoms in accordance with the law. We therefore have to realise that measures which encroach on citizens' digital rights through automated decisions, in whatever context, always carry the risk of creating new attack vectors for those who seek to hijack and negatively influence our democratic debates. With blind trust in privatised enforcement and algorithmic accuracy, we will not end up with a society that is more resilient to uninformed misinformation and maliciously crafted disinformation, but with the exact opposite.
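As a toy illustration of that limitation, here is a hypothetical keyword-based filter (the blocked pattern and both example texts are invented for this sketch). It matches the pattern flawlessly, yet is blind to the context that distinguishes propaganda from journalism about it:

```python
# Hypothetical sketch: a pattern-matching "terrorist content" filter.
# It detects the pattern it was given, but cannot see context such as
# news reporting, satire or research - the balancing a court performs.

BLOCKED_PATTERNS = ["join the armed struggle"]  # invented example pattern


def upload_filter(text: str) -> bool:
    """Return True if the upload should be blocked."""
    lowered = text.lower()
    return any(pattern in lowered for pattern in BLOCKED_PATTERNS)


propaganda = "Join the armed struggle today!"
journalism = ('The leaked video urges viewers to "join the armed struggle", '
              'prosecutors told the court on Monday.')

print(upload_filter(propaganda))   # True - intended hit
print(upload_filter(journalism))   # True - legitimate reporting blocked as well
```

Real deployments use far more sophisticated classifiers than this, but the underlying problem is the same: a statistical match is not a legal judgment.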

Digital rights as a security objective: Fighting disinformation (05.12.2018)
https://edri.org/digital-rights-as-a-security-objective-fighting-disinformation/

Terrorist Content Regulation – Prior authorisation of all uploads? (21.11.2018)
https://edri.org/terrorist-content-regulation-prior-authorisation-for-all-uploads/

How the EU Copyright proposal will hurt the web and Wikipedia (02.07.2018)
https://edri.org/how-the-eu-copyright-proposal-will-hurt-the-web-and-wikipedia/

Upload filters endanger freedom of expression (16.05.2018)
https://edri.org/upload-filters-endanger-freedom-of-expression/

(Contribution by Yannic Blaschke, EDRi intern)
