What are your plans for the coming days? We have a suggestion: The European elections will take place – and it’s absolutely crucial to go and vote!
In the past, the EU has often defended our digital rights and freedoms. This was possible because the Members of the European Parliament (MEPs) – who we, the EU citizens, elected to represent us in the EU decision-making – are open to hearing our concerns.
So, what exactly has the EU done for our digital rights?
The EU has possibly the best protection for citizens’ personal data: the General Data Protection Regulation (GDPR). This law was adopted thanks to some very dedicated European parliamentarians, and it enhances everyone’s rights, regardless of nationality, gender, economic status and so on. Since the GDPR came into effect, we now have for example the right to access our personal data a company or an organisation holds on us, the right to explanation and human intervention regarding automated decisions, and the right to object to profiling measures.
Europe has become a global standard-setter in the defence of the open, competitive and neutral internet. After a very long battle, and with the support of half a million people who responded to a public consultation, the principles that make the internet an open platform for change, freedom, and prosperity are upheld in the EU.
In June 2015, negotiations between the three European Union institutions led to new rules to safeguard net neutrality – the principle according to which everyone can communicate with everyone on the internet without discrimination. This principle was put at risk by the ambiguous, unbalanced EU Commission proposal, which would have undermined the way in which the internet functions. In 2016, the Body of European Regulators for Electronic Communications (BEREC) was tasked with publishing guidelines to provide a common approach to implementing the Regulation in the EU Member States. In June 2016, BEREC published draft guidelines that confirm strong protections for net neutrality and the open internet.
In 2012, MEPs voted against an international trade agreement called the Anti-Counterfeiting Trade Agreement (ACTA), which, if concluded, would have likely resulted in online censorship. It would have had major implications for freedom of expression, access to culture and privacy, and it would have harmed international trade and stifled innovation. People decided to demonstrate, and protests against the draft agreement took place in over 200 European cities, calling for its rejection. In the end, the Parliament listened to the concerns of the people and voted against ACTA.
Whistleblowers fight for transparency, democracy and the rule of law by reporting unlawful or improper conduct that undermines the public interest and our rights and freedoms. In 2017, the European Parliament called for legislation to protect whistleblowers, making a clear statement recognising their essential role in our society. This Resolution started the process of putting in place effective protections for whistleblowers throughout the EU. In April 2019, the Parliament adopted the new Directive, which still has to be approved by the EU Council.
Your vote matters for digital rights
On many occasions, the EU Parliamentarians have stood up for our rights and freedoms. It’s important that the new EU Parliament, too, will be a strong defender of our digital rights – because there are so many important fights coming up.
The European elections are one of the rare occasions where we can take our future and the future of Europe into our own hands. Your vote matters. Please go and vote for digital rights on 23-26 May!
Remember: in April 2018, after the Cambridge Analytica scandal broke, we sent a series of 13 questions to Facebook about its users’ data exploitation policy. Months later, Facebook got back to us with answers. Here is a critical analysis of their response.
Recognising people’s faces without biometric data?
The first questions (1a and 1b) related to Facebook’s new facial recognition feature, which scans every uploaded image for faces and compares them to those already in Facebook’s database in order to identify users. Facebook claims that the identification process only works for users who explicitly consented to have the feature enabled, and that the initial detection stage, during which the photograph is analysed, does not involve the processing of biometric data. Biometric data is data used to identify a person through unique characteristics like fingerprints or facial features.
There are two issues here. First, contrary to what Facebook declared, the first batch of users for whom face recognition was activated received a notice, but were not asked for consent. All users were opted in by default, and only a visit to the settings page allowed them to say “no”. For the second batch of users, Facebook apparently decided to automatically opt-in only those accounts that had the photo tag suggestion feature activated, simply assuming that they wanted face recognition, too. Obviously, this does not constitute explicit consent under the General Data Protection Regulation (GDPR).
Second, even if Facebook does not manage to identify users who disabled the feature or people who are not users, their photos might still be uploaded and their faces scanned. No technology can determine whether an image contains only users who gave consent, without actually scanning every uploaded photo to search for facial features.
One of our questions related to the initial settings every Facebook user has when creating an account, and their default protection levels (question 3). Facebook responded that it had suspended the search for people by phone number in the Facebook search bar. Since Facebook responded to our questions in August 2018, it seems that it has reinstated this function, set to “Everyone can look you up using your phone number” by default (see the Belgian account settings below, last consulted on 24 January 2019).
This reinstatement is probably linked to the upcoming merger of Facebook-owned messaging systems: Facebook Messenger, WhatsApp and Instagram messaging. Identification requirements for each messaging application are different: a Facebook account for Messenger, a phone number for WhatsApp and an email address for Instagram. The merger gives Facebook the possibility to cross-reference information and to connect several profiles under a single, unified identity. What is worse, Facebook now reportedly makes searchable the phone numbers that users had provided for two-factor authentication, and there is no way to switch this feature off.
Other default privacy settings on Facebook are not protective either. Access to a user’s friend list, for example, is set to “publicly visible”. Facebook justified the low privacy level by repeating that users join Facebook to connect with others. Nonetheless, even if users want to limit who can see their friend lists, people can still trace their Facebook friendships by looking at the publicly accessible friend lists of their friends. Some personal information will simply never be fully private under Facebook’s current privacy policies.
The Cambridge Analytica case
Facebook pleaded the misuse of its services and shifted the entire responsibility for the Cambridge Analytica scandal onto the quiz application “This Is Your Digital Life” (our questions 4 and 5). The app requested permission from users to access their personal messages and newsfeed. According to Facebook, there was no unauthorised access to data, as consent was freely given by users. However, accessing one user’s newsfeed and personal messages also meant that the application could access received posts and messages, that is to say, content from users who did not consent. Once again, individual privacy is highly dependent on others’ carefulness. Facebook admitted that it wished it had notified affected users who did not give consent sooner. To our question why the appropriate national authorities were not notified of the incident immediately, Facebook gave no answer.
“This Is Your Digital Life” is just one application, but there may be many more that harvest similar amounts of personal data without users’ consent. Facebook assured us that it has made it harder for third parties to misuse its systems. Nevertheless, the limits on the processing of collected data by third parties remain unclear, and we received no answer about the current possibilities for other applications to share and receive users’ messages.
Facebook’s ad targeting practices
“Advertising is central not only to our ability to operate Facebook, but to the core service that we provide, so we do not offer the ability to disable advertising altogether.” While advertising is thus non-negotiable (our question 9), Facebook explained that through its new Ad Preferences tool (our question 6) users can nevertheless decide whether or not they want to see ads that are targeted at them based on their interests and personal data. The Ad Preferences tool gives users control over the criteria used for targeted advertising: data provided by the user, data collected from Facebook partners, and data based on the user’s activity on Facebook products. Users can also hide advertising topics and disable advertisers with whom they have interacted.
But if Facebook were treating ads settings the same way as privacy settings, as it claims to do, the default settings for a new user would look very different. For this article we created a new Facebook account and found that Facebook does not guide new users through the opt-in and opt-out options for privacy and ad settings. On the contrary, Facebook’s default ad settings involve the profiling of new users based on their relationship status, job title, employer and education (see the new account settings below). These defaults are clearly incompatible with the GDPR’s “privacy by default” requirement.
In the same vein, EDRi member Privacy International found that Facebook collects personal information on people who are logged out of Facebook or don’t even have a Facebook account. The social media company owns so many apps, “business tools” and services that it is capable of tracking users, non-users and logged-out users across the internet. Facebook doesn’t seem to be willing to change its business practices to respect people’s privacy. Privacy is not about what Facebook users can see from each other but what information is accessed and used by third parties and for which purposes without the users’ knowledge or consent.
Profiling and automated decision-making
Article 22 of the GDPR introduces a right not to be subject to a decision based solely on automated processing, including profiling, which produces legal or “similarly significant” effects for the user. We asked Facebook what measures it takes to make sure its ad targeting practices, notably for political ads, comply with this provision (question 7). In its answer, Facebook considers that its targeted ads based on automated decision-making do not yet have legal or similarly significant effects. In light of the numerous scandals the company has been facing around the manipulation of the 2016 U.S. elections and the Brexit referendum, this answer is quite surprising. Many would argue that the way Facebook targets voters with ads based on automated decision-making does indeed have “similarly significant”, if not legal, effects for its users and for societies as a whole. Unfortunately, Facebook doesn’t seem to consider that it should change its ad targeting practices.
Special categories of data
Article 9 of the GDPR defines special categories of particularly sensitive data that include racial or ethnic origin, political opinions, religious beliefs, health, sexual orientation and biometric data. Facebook says that without the user’s explicit consent to use such special categories of data, they will be deleted from respective profiles and Facebook’s servers (our question 2.a).
What Facebook doesn’t say is that users don’t even need to share this information for the platform to monetise it. Facebook can simply deduce religious views, political opinions and health data based on which third-party websites users visit, what they write in Facebook posts, and what they comment on and share: Facebook does not need users to fill in their profile fields when it can infer extremely sensitive information from all the other data users generate on the platform day in, day out. Facebook can then assign different ad preferences (such as “LGBT community”, “Socialist Party”, “Eastern Orthodox Church”) based on each user’s online activities, without asking for consent at all, and exploit them for advertising purposes. Researchers argue that the practice of labelling Facebook users with ad preferences associated with special categories of personal data may be in breach of Article 9 of the GDPR, because no legal basis other than explicit consent could allow this form of use. In its reply to our questions, Facebook deliberately omitted its use of sensitive data derived from user behaviour, posts, comments, likes and so on to feed its marketing profiles. It is too easy to focus on the tip of the iceberg.
Right to access
Replying to our request on the right to access, download, erase or modify personal data, Facebook described its three main tools: Download Your Information (DYI), Access Your Data (AYD) and Clear History (our question 8). According to Facebook, DYI provides users with all the data they provided on the platform. But as explained above, this does not include information inferred by the platform based on user behaviour, posts, comments, likes and so on, nor information provided by friends or other users, such as tags in photos or posts.
The internet is an incredible tool and has empowered women to speak up, react and organise to face patriarchy and oppression. But the internet is not a neutral place – sexist, racist, homophobic and other violent types of behaviour and content are disproportionately affecting women. This International Women’s Day, we would like to celebrate positive stories and provide practical tips, accessible tools and material for women’s digital safety, security and privacy.
Women are more likely to be subject to online harassment and violence, massive campaigns of abuse and intimidation, or exploitation and manipulation of private data. An Amnesty International report found that women of colour, women with disabilities, lesbian, bisexual and trans women, and women at the intersection of forms of oppression are even more targeted. The factors are manifold: the lack of accountability of malicious attackers, leading to a feeling of impunity, or companies’ and developers’ lack of knowledge about violence and abuse on their infrastructures. Victims are left with little support for the violence they have encountered. This leads women to self-censor, restricting their freedom of expression and their meaningful participation online.
Browsing safely and anonymously
When browsing the web, personal data and internet activity are being collected and recorded. Websites collect data such as demographics, intimate interests and tastes, personal habits and hobbies. This enormous amount of personal data includes sensitive information like credit card data, physical location, sexual preferences, religion, health and others. This information is extremely valuable to companies, governments and malicious actors alike and can be exploited and facilitate targeted attacks on women. One part of the solution is to use encryption. Using encryption is not as hard as it seems: Start with HTTPS Everywhere, a browser add-on that tells websites you visit to use encryption when available (a browser add-on is a small programme that customises your browser’s behaviour).
The infamous cookies are small pieces of data stored by websites on your devices, originally designed to remember your previous choices on a website, such as form fields, shopping cart items and language choice. Today, they are often used by third parties to assign you a unique identifying number which helps advertising companies follow you around across the web. While you probably want to allow some of the useful cookies on shopping portals and other websites, it’s definitely a good idea to block all third-party cookies. This can be done directly in your browser settings.
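To see why third-party cookies matter, here is a deliberately simplified sketch of the tracking mechanism described above (the site names are hypothetical, and a real tracker is of course far more complex):

```python
import uuid

class ThirdPartyTracker:
    """Toy model of a tracker embedded on many different websites."""

    def __init__(self):
        self.profiles = {}  # cookie ID -> list of sites visited by that browser

    def handle_request(self, site, cookie=None):
        """Called each time a page embedding the tracker is loaded."""
        if cookie is None:
            cookie = str(uuid.uuid4())  # first visit: assign a unique ID
        self.profiles.setdefault(cookie, []).append(site)
        return cookie  # the browser stores and resends this cookie

tracker = ThirdPartyTracker()
cookie = tracker.handle_request("news-site.example")      # first visit
cookie = tracker.handle_request("shop.example", cookie)   # same browser
cookie = tracker.handle_request("social.example", cookie)

# The tracker now holds a cross-site browsing profile for one browser:
print(tracker.profiles[cookie])  # ['news-site.example', 'shop.example', 'social.example']
```

Blocking third-party cookies in your browser settings simply stops the cookie from being resent, so the tracker cannot link your visits together.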
Other forms of snooping include website trackers, which are mostly used by advertising companies. Trackers are little snippets of computer code, often invisibly embedded in advertisements on all kinds of websites, including your favourite newspaper, shopping site and social network. Trackers are often served by a third party such as Google or Facebook rather than by the original owner of a website. You know those “Like” buttons you find all over the web? That’s actually a tracker telling Facebook which sites you’ve visited and which newspaper articles you’ve read. Luckily, two simple browser add-ons will help you block undesired trackers: install Privacy Badger and uBlock Origin and you’re good to go.
In order to increase your anonymity, you can use the Tor network or a Virtual Private Network (VPN). Those tools are particularly tailored and recommended for politically active women, human rights defenders, or women fearing for their safety. More information can be found here.
For women especially, the collection of data for commercial purposes can be very intrusive. Many doubts have been cast on menstruapps, very popular health-related mobile applications that help women monitor their menstrual cycles. Not only do these apps know the timing of your period, they also invite users to share very intimate details about their periods, like symptoms or sex drive. Menstruation, pregnancy, online dating and many more aspects of women’s lives are turned into marketing targets. Another piece of advice: never blindly trust mobile apps.
Lastly, it is important to note that websites often request too much information about users in exchange for being allowed to use the service. More than just an email address and a password, websites may require a name, a location, and other unnecessary details. A good rule to follow is to only give personal information that is absolutely necessary – an email address to receive a registration confirmation or to retrieve a password, for example. The rest is up to one’s imagination and creativity: a fake address, a fake birth date, etc. Faking means lowering the risk of having personal information compromised.
Protecting your accounts and communications
Staying safe online also means protecting your communications and accounts against identity theft and hacking. When it comes to securing personal accounts, strong passwords are key. Here are the latest rules to create super strong passwords. Don’t use the same password across websites and services, and if you have more passwords than you can remember, use a password manager that keeps them all in one secure place for you. Another good practice to reduce the risk of hacking is to activate two-factor authentication when it is available: after entering a password, you will receive a second code on a different device or service.
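As a small aside for the technically inclined: a strong random password of the kind these rules recommend can be generated with a few lines of standard-library Python. This is just a sketch, not a substitute for a proper password manager, which will do this for you:

```python
import secrets
import string

def generate_password(length=16):
    """Generate a random password using a cryptographically secure source."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # a different random password on every run
```

The `secrets` module is designed for security-sensitive randomness, unlike the general-purpose `random` module, which should never be used for passwords.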
Intimate communications such as explicit pictures are particularly vulnerable content that can be used for all kinds of harassment practices such as “doxxing” (the publishing of someone’s private information) or “revenge porn”. Specific advice on how to sext safely can be found here.
When it comes to gaming, and especially multiplayer games, the experience for women can be less than enjoyable. In order to stay safe from harassment or sexism, there are a couple of things that you can put in place: make use of games’ reporting systems, mute an individual player in the chat function, don’t use your real name but instead register with a pseudonym that does not hint at your gender, don’t use a gamertag that you already use in other social media profiles, don’t use a real photo of yourself for your profile, and don’t give away any personal information in chats, such as your phone number or location.
Coping with and recovering from online harassment
Women – and in particular women of colour, women with disabilities and lesbian, bisexual or trans women – represent the majority of harassment and violence targets. As a consequence, many women’s experience on social media leads them to self-censor what they post, and sometimes even delete their account. If you’re experiencing harassment on social media platforms such as Twitter, there are possibilities to cope with the situation and fight back. For example, victims can ask platforms to delete, suspend or send a warning to harassing accounts. HeartMob is a supportive tool where people can document the harassment they are experiencing on social media and request the support they need from an online community.
For women who are human rights defenders or political activists, taking action on this issue may include developing fully-fledged security and protection strategies for human rights defenders. Threats, incitement to rape and any other form of violence are illegal and can be reported to law enforcement authorities. Victim-support NGOs and services can assist you.
Security in a Box covers both digital security tools and tactics, including advice on how to use social media and mobile phones more safely as well as step-by-step instructions to install, configure and use digital security software and services.
Exposing the Invisible is a platform gathering lots of materials and projects related to digital security and privacy.
During the past year, our work to defend citizens’ rights and freedoms online has gained impressive visibility – we counted more than three hundred mentions! – in European and international media. Below, you can find our 2018 press review.
A test by EDRi member noyb, a European non-profit organisation for privacy enforcement, shows structural violations by most streaming services. In more than ten test cases, noyb was able to identify violations of Article 15 of the General Data Protection Regulation (GDPR) in many shapes and forms by companies like Amazon, Apple, DAZN, Spotify and Netflix. On 18 January 2019, noyb filed a wave of ten strategic complaints against eight companies.
Under the GDPR, users enjoy a “right to access”: the right to get a copy of all raw data that a company holds about them, as well as additional information about the sources and recipients of the data, the purposes for which the data is processed, the countries in which the data is stored, and how long it is stored. This right is enshrined in Article 15 of the GDPR and Article 8(2) of the Charter of Fundamental Rights of the European Union.
noyb put eight online streaming services from eight countries to the test – but no service fully complied. In eight out of eight cases, noyb filed formal complaints with the relevant data protection authorities.
“All major providers even engaged in ‘structural violation’ of the law,” said Max Schrems, Director of noyb.
While many smaller companies manually respond to GDPR requests, larger services like YouTube, Apple, Spotify or Amazon have built automated systems that claim to provide the relevant information. When tested, none of these systems provided the user with all relevant data.
“Many services set up automated systems to respond to access requests, but they often don’t even remotely provide the data to which every user has a right. In most cases, users only got the raw data, but, for example, no information about who this data was shared with. This leads to structural violations of users’ rights, as these systems are built to withhold the relevant information,” said Schrems.
While all other streaming services at least provided some response to users’ requests to access their data, the United Kingdom sports streaming service DAZN and the German music streaming service SoundCloud simply ignored the request. However, the responses that were received lacked background information, such as the sources and recipients of data or how long data is actually stored (the “retention period”). In many cases, the raw data was provided in cryptic formats that made it extremely hard or even impossible for an average user to understand the information. In many cases, certain types of raw data were also missing.
noyb has filed complaints with the Austrian Data Protection Authority (dsb.gv.at) against eight companies, on behalf of ten users. The Austrian authority will have to cooperate with the relevant authorities at the main establishment of each streaming service. As the GDPR foresees penalties of up to 20 million euro or 4% of worldwide turnover, whichever is higher, the theoretical maximum penalty across the ten complaints could be 18.8 billion euro.
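The “whichever is higher” rule of Article 83(5) GDPR can be expressed in a couple of lines; the turnover figures below are purely illustrative, not the complainants’ real numbers:

```python
def gdpr_max_penalty(annual_turnover_eur):
    """Article 83(5) GDPR cap: EUR 20 million or 4% of worldwide
    annual turnover, whichever is higher."""
    return max(20_000_000, 0.04 * annual_turnover_eur)

# Illustrative turnover figures (hypothetical companies):
print(gdpr_max_penalty(100_000_000))     # small company: the flat 20M euro cap applies
print(gdpr_max_penalty(50_000_000_000))  # large company: 4% of turnover, i.e. 2 billion euro
```

For large multinationals the 4% branch dominates, which is how ten complaints can add up to a theoretical maximum in the billions.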
The right to access is a cornerstone of the data protection framework. Only when users can get an idea of how and why their data is stored or shared can they realistically uncover violations of the GDPR and consequently take action. Every user has the right to get a copy of his or her data and to receive additional information. Usually, users can fill out a form or send an email; noyb has collected the links and forms for major streaming services on its webpage for everyone to use.
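Where a service offers no dedicated form, a plain email is enough. The Article 15 rights listed above can be sketched as a simple text template; the company and sender names here are placeholders to fill in:

```python
def access_request(company, sender_name, sender_email):
    """Assemble a minimal GDPR Article 15 access request as plain text."""
    return (
        f"Dear {company},\n\n"
        "Under Article 15 of the General Data Protection Regulation, I request:\n"
        "- a copy of all personal data you hold about me;\n"
        "- the sources and recipients of that data;\n"
        "- the purposes for which it is processed;\n"
        "- how long it is stored (the retention period).\n\n"
        f"Kind regards,\n{sender_name} ({sender_email})"
    )

print(access_request("ExampleStream Ltd", "Jane Doe", "jane@example.org"))
```

This is only a sketch of the request’s substance; a real request may also need proof of identity, depending on the service.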
Article 80 of the GDPR foresees that data subjects can be represented by a non-profit association, as individual users are usually unable to file the relevant legal complaints. In this case all ten users are represented by the non-profit organisation noyb.
“noyb is meant to reasonably enforce the new rules, so that the benefits actually reach the users,” Schrems said.
noyb.eu is funded by over 3100 individual supporting members and sponsors. In order to finance the fight against data breaches in the long term, the association is looking for more supporting members. “In 1995 the EU already passed data protection laws, but they were simply ignored by the big players. We now have to make sure this does not happen again with GDPR – so far many companies only seem to be superficially compliant,” said Schrems.
More and more women use a period tracker: an app that keeps track of your menstrual cycle. However, these apps do not always treat the intimate data that you share with them carefully.
An app that notifies you when to expect your period or when you are fertile can be useful, for example to predict when you can expect to suffer the side effects that, for a lot of women, come with their period. In itself, keeping track of your cycle is nothing new: putting marks in your diary or on your calendar has always been an easy way to take your cycle into account. But sharing data on the workings of your body with an app is riskier.
There seems to be quite a large market for period tracker apps. From “Ladytimer Maandstonden Cyclus Kalender” to “Magic Teen Girl Period Tracker”, from “Vrouwenkalender” to “Flo” – all neatly lined up in different shades of pink in the app store. “Femtech” is seen as a growing market in which different startups have raised billions of dollars in investments over the last couple of years. Are these apps made to provide women with more insight into the workings of their bodies, or to monetise that need?
It’s interesting to look at the kind of data these apps collect. The app usually opens with a calendar overview. In the overview you can input the date of your last period. In addition, you can keep a daily record of how you feel (happy, unhappy, annoyed) and whether you experience blood loss. But for most of these apps it doesn’t end there. Have you had sex? And if so, with or without protection? With yourself or with another person? How would you grade the orgasm? Did you have a stomach ache? Were your bowel movements normal? Did you feel like having sex? Sensitive breasts? An acne problem? Did you drink alcohol? Exercise? Did you eat healthy?
For a number of these questions it is understandable why answering them might be useful, if the app wants to learn to predict in what stage of your cycle you are. But a lot of these questions are quite intimate. And all this sensitive data often seems to end up in possession of the company behind the app. The logical question then is: What exactly does a company do with all this data you hand over? Do you have any say in that? Do they treat it carefully? Is the data shared with other parties?
After digging through a number of privacy statements, it appears that one of the most used apps in the Netherlands, “Menstruatie Kalender”, gives Facebook permission to show in-app advertisements. It’s not clear what information Facebook gathers about you from the app in order to show you advertisements. For example, does Facebook get information on when you are having your period?
Another frequently used app in the Netherlands is “Clue”. It’s the only one we found that has a comprehensive and easily readable privacy statement. You can use the app without creating an account in which case data is solely stored locally on your phone. If you do choose to create an account you give explicit consent to share your data with the company. In that case it is stored on secure servers. With your consent it will also be used for academic research into women’s health.
This cannot be said of many other apps. Their privacy statements are often long and difficult to read, and require good reading-between-the-lines skills to understand that data is being shared with “partners”. The sensitivity of your breasts may in itself not be very interesting to an advertiser, but by keeping track of your cycle the apps automatically acquire information on the possible start of one of the periods of your life most interesting to marketeers: motherhood.
The most extreme example is Glow, the company behind the period tracker app “Eve”. Their app is focused on the potential desire to have children. The company’s tagline is as straightforward as they come: “Women are 40% more likely to conceive when using Glow as a fertility tracker”. Besides Eve, Glow has three other apps: an ovulation and fertility tracker, a baby tracker and a pregnancy tracker. The apps link to the Glow-community, a network of forums where hundreds of women share their experiences and give each other tips.
But that’s not the only thing that Glow offers. You can’t use a Glow webpage or app without being shown the “Fertility Program”. For 1200-7000 euro, you can enroll in different fertility programs. Too expensive? You are able to take out a cheap loan through a partnership with a bank. And in the end, freezing your eggs, if you are in your early thirties, is the most economically viable option, according to the website.
It turns out that Glow is a company selling fertility products, which has built a number of apps to subtly (and sometimes not so subtly) attract more female customers. As a consumer you think you are using an app to keep track of your cycle, but in the meantime you are constantly notified of all the possibilities of freezing your eggs, the costs of pregnancy at a higher age, and your limited fertile years. Before you know it, you are lying awake at age 30, wondering whether it would be more “economical” to freeze your eggs.
These apps shed light on what seems to be a contract to which we are forced to consent more and more often. In exchange for the use of an app that makes our lives a little bit easier, we have to give away a lot of personal information, without knowing exactly what happens with it. The fact that these apps deal with intimate information doesn’t mean that the creators treat it more carefully. On the contrary: it increases the market value of that data.
So before you download one of these apps, or advise your daughter to download one, think again. Take your time to read an app’s privacy statement, so you know exactly what the company does with your data. But there is also a responsibility for regulatory bodies, such as the Autoriteit Persoonsgegevens in the Netherlands, to ensure companies don’t abuse your intimate data.
Are you using one of these apps, and do you want to know which data the company has gathered about you, or do you want to have that data erased? You can easily draw up a request, to be sent by post or email, using My Data Done Right.
Public consultations are an opportunity to influence future legislation at an early stage, in the European Union and beyond. They are your opportunity to help shape a brighter future for digital rights, such as your right to a private life, data protection, or your freedom of opinion and expression.
Below you can find a list of public consultations we find important for digital rights. We will update the list on an ongoing basis, adding our responses and other information that can help you get engaged.
Call for input on the Body of European Regulators for Electronic Communications (BEREC) Work Programme 2020.
EDRi has joined a letter signed by 30 representatives of civil society and the online industry, addressed to the Ministers in the Telecoms Council, expressing the wide support for the ePrivacy Regulation. The letter describes the clear and urgent need to strengthen the privacy and security of electronic communications in the online environment, especially in the wake of repeated scandals and practices that undermine citizens’ right to privacy and their trust in online services.
The support from privacy-friendly businesses such as Qwant, Startpage, Startmail, TeamDrive, Tresorit, Tutanota, ValidSoft and WeTransfer shows the positive implications that ePrivacy will have for a dynamic and innovative European internet industry. The collaboration between organisations defending citizens’ rights and industry representatives underlines that both EU citizens and privacy-friendly business models have much to gain from a strong ePrivacy Regulation.
EDRi wholeheartedly supports the coalition’s call on the Council of Ministers to finally move the ePrivacy discussion forward, so that a compromise with the European Parliament can be found before the elections in May 2019. If this is achieved, European citizens will benefit from a strong privacy regime and a less intrusive, more dynamic and more innovative EU data economy.
In October 2018, the United Nations (UN) Special Rapporteur for the promotion and protection of the right to freedom of opinion and expression, David Kaye, released his report on the implications of artificial intelligence (AI) technologies for human rights. The report was submitted to the UN General Assembly on 29 August 2018 but has only been published recently. The text focuses in particular on freedom of expression and opinion, privacy and non-discrimination. In the report, Kaye first clarifies what he understands by artificial intelligence and what using AI entails for the current digital environment, debunking several myths. He then provides an overview of all potential human rights affected by the relevant technological developments, before laying down a framework for a human rights-based approach to these new technologies.
1. Artificial intelligence is not a neutral technology
David Kaye defines artificial intelligence as a “constellation of processes and technologies enabling computers to complement or replace specific tasks otherwise performed by humans” through “computer code […] carrying instructions to translate data into conclusions, information or outputs.” He states that AI is still highly dependent on human intervention, as humans need to design the systems, define their objectives and organise the datasets for the algorithms to function properly. The report points out that AI is therefore not a neutral technology, as the use of its outputs remains in the hands of humans.
Current forms of AI systems are far from flawless: they demand human scrutiny and sometimes even correction. The report considers the automated character of AI systems, the quality of their data analysis, and the systems’ adaptability to be sources of bias. Automated decisions may produce discriminatory effects, as they rely exclusively on specific criteria without necessarily balancing them, and they undermine scrutiny of and transparency over the outcomes. AI systems also rely on huge amounts of data of questionable origin and accuracy. Furthermore, AI can identify correlations that are then mistaken for causation. Finally, David Kaye points to the main problem of adaptability: once human supervision is lost, ensuring transparency and accountability becomes a challenge.
2. Current uses of artificial intelligence interfere with human rights
David Kaye describes three main applications of AI technology that pose important threats to several human rights.
The first problem raised is AI’s effect on freedom of expression and opinion. On the one hand, “artificial intelligence shapes the world of information in a way that is opaque to the user” and conceals its role in determining what the user sees and consumes. On the other hand, the personalisation of information display has been shown to reinforce biases and “incentivize the promotion and recommendation of inflammatory content or disinformation in order to sustain users’ online engagement”. These practices impact individuals’ self-determination and autonomy to form and develop personal opinions based on factual and varied information, thereby threatening freedom of expression and opinion.
Secondly, similar concerns can be raised in relation to our right to privacy, in particular with regard to AI-enabled micro-targeting for advertising purposes. As David Kaye states, profiling and targeting users foster the mass collection of personal data, and lead to inferring “sensitive information about people that they have not provided or confirmed”. The limited possibilities to control the personal data collected and generated by AI systems call the respect for privacy into question.
Third, the Special Rapporteur highlights AI as an important threat to our rights to freedom of expression and non-discrimination, due to the increasingly prominent role allocated to AI in the moderation and filtering of online content. Despite some companies’ claims that artificial intelligence can scale beyond the limits of human capacity, the report sees the recourse to automated moderation as impeding the exercise of human rights. In fact, artificial intelligence is unable to resist discriminatory assumptions or to grasp sarcasm and the cultural context of each piece of content published. As a result, freedom of expression and our right not to be discriminated against can be severely hampered by delegating complex censorship decisions to AI and private actors.
3. A set of recommendations for both companies and States
Recalling that “ethics” is not a cover for companies and public authorities to neglect binding and enforceable human rights-based regulation, the UN Special Rapporteur recommends that “any efforts to develop State policy or regulation in the field of artificial intelligence should ensure consideration of human rights concerns”.
David Kaye suggests that human rights should guide the development of business practices, AI design and deployment, and calls for enhanced transparency, disclosure obligations and robust data protection legislation – including effective means of remedy. Online service providers should make clear which decisions are made with human review and which by artificial intelligence systems alone. This information should be accompanied by explanations of the decision-making logic used by the algorithms. Further, the “existence, purpose, constitution and impact” of AI systems should be disclosed in an effort to improve individual users’ education on this topic. The report also recommends making available and publicising data on the “frequency at which AI systems are subject to complaints and requests for remedies, as well as the types and effectiveness of remedies available”.
States are identified as the key actors responsible for creating a legislative framework that fosters a pluralistic information landscape, prevents technology monopolies, and supports network and device neutrality.
Lastly, the Special Rapporteur provides useful tools to oversee AI development:
human rights impact assessments performed before, during and after the use of AI systems;
external audits and consultations with human rights organisations;
enabled individual choice thanks to notice and consent;
effective remedy processes to end human rights violations.
It has been 652 days since the European Commission launched its proposal for an ePrivacy Regulation. The European Parliament took a strong stance on the proposal when it adopted its position a year ago, but the Council of the European Union is still only taking baby steps towards finding its own.
In its latest proposal, the Austrian Presidency of the Council unfortunately continues the trend of presenting the Council with suggestions that lower the privacy protections proposed by the Commission and strengthened by the Parliament. The latest working document, published on 19 October 2018, makes it apparent that we are far from having reached the bottom of what the Council sees as acceptable in treating our personal data as a commodity.
Probably the gravest change to the text is that it allows tracking technologies to be stored on an individual’s computer without consent for websites that finance themselves partly or wholly through advertising, provided they have informed the user of the existence and use of such processing and the user “has accepted this use” (Recital 21). Such “acceptance” of identifiers is far from the informed consent that the General Data Protection Regulation (GDPR) established as the standard in the EU. The Austrian Presidency text puts cookies that are necessary for regular use (such as language preferences and the contents of a shopping basket) on the same level as the very invasive tracking technologies being pushed by the Google/Facebook duopoly in the current commercial surveillance framework. This opens a Pandora’s box of ever more sharing, merging and reselling of citizens’ data in huge online commercial surveillance networks, and of micro-targeting people with commercial and political manipulation, without the knowledge of the person whose private information is being shared with a large number of unknown third parties.
One of the great added values of the ePrivacy Regulation (which was originally intended to enter into force at the same time as the GDPR) is that it is supposed to raise the bar for companies and other actors that want to track citizens’ behaviour on the internet by placing tracking technologies on users’ computers. Currently, such an accumulation of potentially highly sensitive data about an individual mostly happens without the individual’s real knowledge, often through coerced (not freely given) consent, and the data is shared and resold extensively within opaque advertising networks and data-broker services. In a strong and future-proof ePrivacy Regulation, the collection and processing of such behavioural data therefore needs to be tightly regulated and based on the informed consent of the individual – an approach that is now increasingly jeopardised, as the Council seems to become ever more favourable to tracking technologies.
The detrimental change to Recital 21 is only one of the bad ideas through which the Austrian Presidency seeks to strike a consensus. In addition, there is, for instance, the undermining of protections through “compatible further processing” (itself already a bad idea introduced by the Council) in Article 6 2aa (c), and the watering down of the requirements for regulatory authorities in Article 18, which creates significant friction with the GDPR. With one disappointing “compromise” after another, the ePrivacy Regulation is increasingly in danger of falling short of its ambition to end the unwanted stalking of individuals on the internet.
EDRi will continue to closely observe the development of the legislation, and calls on everyone in favour of a solid EU privacy regime that protects citizens’ rights and competition to voice their demands to their member states.