Copyright

In the digital era, copyright should be implemented in a way which benefits creators and society. It should support cultural work and facilitate access to knowledge. Copyright should not be used to lock away cultural goods, damaging rather than benefitting access to our cultural heritage. Copyright should be a catalyst of creation and innovation. In the digital environment, citizens face disproportionate enforcement measures from states, arbitrary privatised enforcement measures from companies and a lack of innovative offers, all of which reinforces the impression of a failed and illegitimate legal framework that undermines the relationship between creators and the society they live in. Copyright needs to be fundamentally reformed to be fit for purpose, predictable for creators, flexible and credible.

27 Mar 2019

GDPR incompatibility – the blind spot of the copyright debate

By Chloé Berthélémy

The debate around the Copyright Directive reform has been intense. Former Article 13, which became Article 17 in the text voted by the European Parliament on 26 March, created the greatest controversy, with stakeholders arguing about the so-called “value gap” in the creative sectors, upload filters, and a new platform liability regime, among other issues. However, few observers have analysed the impact of Article 13/17 on the General Data Protection Regulation (GDPR). On 23 March, Dr. Malte Engeler, a German judge, published an article explaining why the filtering technology required by the Copyright Directive might be incompatible with European data protection rules.

Article 13/17 requires content hosting providers to make their best efforts to prevent the upload or re-upload of copyright-protected works – which can only be achieved with upload filters – except where uploads are covered by specific copyright exceptions such as quotation, criticism or parody. For filters to function properly while taking those exceptions into account, they would need to recognise the context of the upload, that is to say the information surrounding the content, including personal data of the user uploading it. The question Engeler asks is: under which legal basis of the GDPR would platforms be able to process such personal data?

According to Engeler, platforms would be considered controllers in the sense of the GDPR because they decide which technologies they will use to monitor content. When analysing a film extract uploaded without authorisation, a filter would need to know whether it was used by a film critic – which would be legal according to the copyright exceptions listed in Article 13/17 – or by a user attempting to illegally distribute the film. Detecting such differences in the use of the same piece of content would depend on “meta information about the upload” such as the user identity, the place, and the date. This information would be considered personal data, and its analysis by the algorithm would be processing under GDPR.
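
To make this concrete, here is a minimal, purely hypothetical sketch – not any real filter's design; all names, fields and fingerprints are invented – of the decision such a filter would have to take. The point is that the uploader's identity, place and date, which are personal data, are unavoidable inputs to the copyright decision:

    from dataclasses import dataclass

    @dataclass
    class Upload:
        fingerprint: str   # content hash, matched against rights holders' references
        uploader_id: str   # user identity   -> personal data
        place: str         # upload location -> personal data
        date: str          # upload date     -> personal data
        caption: str       # textual context of the upload

    REFERENCE_DB = {"8f2a17": "Film X"}  # fingerprints supplied by rights holders

    def covered_by_exception(u: Upload) -> bool:
        # Deciding whether the quotation/criticism/parody exceptions apply means
        # analysing who uploads, where, when and in which context - and that
        # analysis is itself "processing" in the sense of Article 4(2) GDPR.
        return "review" in u.caption.lower() or "critique" in u.caption.lower()

    def filter_decision(u: Upload) -> str:
        if u.fingerprint not in REFERENCE_DB:
            return "allow"
        return "allow" if covered_by_exception(u) else "block"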

The article goes on to examine the legal bases provided for in the GDPR (Article 6(1)) under which such processing could be allowed. Consent could not be freely given, because all platforms would be required to have this processing in place, leaving users no alternative. Making upload filters part of the terms and conditions would not satisfy the criterion of necessity in Article 6(1)(b), which allows the processing of personal data to execute a contract. Furthermore, the processing of personal data by content filters is neither necessary to protect the user’s vital interests, nor is it done for public or legitimate interests pursued by the platform – platforms do not want an obligation to put filters in place. This leaves platforms with the legal basis whereby the processing is necessary for compliance with another legal obligation (Article 6(1)(c)), which would here be compliance with the Copyright Directive.

However, considering the high risk of liability, smaller platforms will likely have to implement third party filters, bought as a service from bigger companies that have invested tens of millions of euros in such technologies. As a result, few big content filtering companies will be able to process the above-mentioned personal data of the vast majority of users. The new copyright Directive would thus lead to centralised filtering mechanisms.

This is problematic with regard to the principle of proportionality mentioned in the GDPR and in the Charter of Fundamental Rights of the European Union. Such a filtering system was already rejected by the Court of Justice of the European Union (CJEU) because it failed to strike a fair balance “between the right to intellectual property, on the one hand, and the freedom to conduct business, the right to protection of personal data and the freedom to receive or impart information on the other”. The legal obligation that Article 13/17 creates for platforms is incompatible with the right to protection of personal data, which makes it hard to rely on for the processing of personal data under the GDPR.

Copyright Directive: Does the best effort principle comply with GDPR? (23.03.2019)
https://www.telemedicus.info/article/3402-Copyright-Directive-Does-the-best-effort-principle-comply-with-GDPR.html

Press Release: Censorship machine takes over EU’s internet (26.03.2019)
https://edri.org/censorship-machine-takes-over-eu-internet/

SABAM vs Netlog – another important ruling for fundamental rights (16.02.2012)
https://edri.org/sabam_netlog_win/

All you need to know about copyright and EDRi (15.03.2019)
https://edri.org/all-you-need-to-know-about-copyright-and-edri/

(Contribution by Chloé Berthélémy, EDRi)

26 Mar 2019

Press Release: Censorship machine takes over EU’s internet

By EDRi

Today, on 26 March, the European Parliament voted in favour of adopting controversial upload filters (Article 13/17) as part of the copyright Directive. This vote comes after what was an intense campaign for human rights activists, with millions of signatures, calls, tweets and emails from concerned individuals, as well as Europe-wide protests.

Despite the mobilisation, 348 Members of the European Parliament (MEPs) gave their support to the proposed text, which includes concerning restrictions on freedom of expression. Notably, 274 stood up with citizens and voted to reject upload filters. The proposal to open the text for amendments was rejected by a margin of five votes. The amendments proposing the deletion of Article 13 were not even subject to a vote.

Article 13 of the copyright Directive contains a change to internet hosting services’ liability that will necessarily lead to the implementation of upload filters on a vast number of internet platforms. With its dangerous potential for automated censorship mechanisms, online content filtering could be the end of the internet as we know it.

“Disappointingly, the newly adopted Directive does not benefit small independent authors, but instead, it empowers tech giants. More alarmingly, Article 13 of the Directive sets a dangerous precedent for internet filters and automated censorship mechanisms – in the EU and across the globe,” said Diego Naranjo, Senior Policy Advisor at EDRi.

European Digital Rights (EDRi) has long advocated for a copyright reform that would update the current EU copyright regime to be fit for the digital era, and make sure artists receive remuneration for their work and creativity. This Directive delivers neither.


EU Member States will now have to transpose the Directive into their national laws and decide how strictly they will implement upload filters. People need to pay special attention to the national-level implementation of the Directive in order to ensure that the voted text does not enable censorship tools that restrict our fundamental rights.

Ahead of the next European Parliament elections, this vote comes as another important reminder of the impact that EU law-making can have on human rights online and offline. EDRi ensures the voice of civil society is represented in the EU democratic process and would like to thank all those involved in the battle against upload filters for their inspiring dedication towards the defence of fundamental rights and freedoms.

Copyright reform: Document pool
https://edri.org/copyright-reform-document-pool/

All you need to know about copyright and EDRi (15.03.2019)
https://edri.org/all-you-need-to-know-about-copyright-and-edri/

21 Mar 2019

Join the ultimate Action Week against Article 13

By Andreea Belu

The final vote on the Copyright Directive in the European Parliament plenary will take place on 26 March. A key piece raising concerns in the proposal is Article 13. It contains a change to platforms’ responsibility that will inevitably lead to the implementation of upload filters on a vast number of internet platforms. The proposed text of Article 13 on which the Parliament will be voting is the worst we have seen so far.

Public outcry around Article 13 reached a historic peak, with almost five million individuals signing a petition against it, and thousands calling, tweeting and emailing their Members of the European Parliament (MEPs). Despite the scale of the protests, legislators have failed to address the problems and remove upload filters from the proposal.

Join the Action Week (20 March – 27 March) organised by the free internet community and spread the word about the #SaveYourInternet movement! Send Members of the European Parliament a strong message: “Side with citizens and say NO to upload filters!”

NOW – Get active!

Kickstart the action week! Did you get your MEP to pledge opposition to the “Censorship Machine” during the plenary vote? Did you reach out to a national news outlet to explain to them why this is bad for the EU? Did you tell your best mate your meme game may be about to end? If you answered “No” to any of those questions… NOW IS THE TIME TO ACT.

21 March – Internet blackout day

Several websites are planning to shut down on this day. Wikimedia Germany is one of them. Is your website potentially hosting copyrighted content, and therefore affected by the upcoming copyright upload filter? Join the protest!
#Blackout21

23 March – Protests all over Europe

Thousands have marched in the streets in the past weeks. The protests were driven not least by the European Commission’s allegations that the #SaveYourInternet movement was bots-driven, by purposely misleading communication from the European Parliament, and by the attempt to rush the final vote weeks before originally scheduled. 23 March will be the general protest day – see a map here. Commit to the EU’s core democratic values and show what positive citizens’ engagement looks like!
#Article13Demo #Artikel13Demo

19 to 27 March – Activists travel to meet their MEPs

We have launched a travel grant for activists willing to travel to Strasbourg and Brussels in order to discuss the reform with their representatives. Do you want to take part in our final effort to get rid of mandatory upload filters? Join us! The deadline to apply is Friday 15 March.
#SYIOnTour

It is very important that we connect with our MEPs and make our concerns heard every day of the Action Week. Whether you can travel, make phone calls to get in touch with your representatives, or raise awareness in your local community – it all makes a huge difference. Build on the voices of internet luminaries, the UN Special Rapporteur on Freedom of Expression, civil society organisations, programmers, and academics who spoke against Article 13!

We need to stop the censorship machine and work together in order to create a better European Union! You can count on us! Can we count on you?

Read more

Save Your Internet Campaign website
https://saveyourinternet.eu/

Pledge 2019 Campaign Website
https://pledge2019.eu/en

Upload Filters: history and next steps (20.02.2019)
https://edri.org/upload-filters-status-of-the-copyright-discussions-and-next-steps

18 Mar 2019

Open letter: Regulation on terrorist content online endangers freedom of expression

By EDRi

On 18 March 2019, together with seven other organisations, EDRi sent a letter to Members of the European Parliament (MEPs) to share our concerns with regard to the draft Regulation on preventing the dissemination of terrorist online content.

The European Parliament Committee on Civil Liberties, Justice and Home Affairs (LIBE) is set to vote on its Report on the draft Regulation on 21 March. If the original Commission proposal is not seriously re-drafted, it could have major impacts on civil liberties online.

You can read the letter here (pdf), and below:

Brussels, 18 March 2019

Dear Members of the European Parliament,

We, the undersigned organisations, would like to express some of our views on the draft Regulation on preventing the dissemination of terrorist content online published in September 2018, ahead of a key vote in the Civil Liberties Committee.

We believe that illegal terrorist content is unequivocally unacceptable offline and online. While we understand the aim of the draft Regulation, we regret that the approach taken by the European Commission and the Council of the European Union did not address the most pressing concerns we share on this text, such as the wide definitions of terrorist content and of hosting service providers falling within the scope of the Regulation, the introduction of unworkable deadlines for content removal and mandatory “proactive measures”. These requirements could necessitate the introduction of upload filters and therefore potentially lead to removal of legal content. Far from helping private and public actors curb the dissemination of terrorist propaganda online, this draft Regulation risks undermining current efforts and could have a strong impact on European citizens’ fundamental rights.

Similar concerns on the provisions of this draft Regulation have been expressed by international institutions, including the EU Fundamental Rights Agency (FRA), the three UN Special Rapporteurs in a joint opinion and the European Data Protection Supervisor (EDPS).

We therefore urge the Civil Liberties Committee to take a proportionate approach compliant with the EU Charter of Fundamental Rights and the EU acquis, by:

  • Ensuring that the definition of terrorist content is aligned with the Terrorism Directive, and that the dissemination of such content is directly linked to the intent of committing terrorist offences.
  • Narrowing the definition of terrorist groups to cover only those terrorist groups listed by the United Nations and the European Union.
  • Limiting the definition of hosting services to services where a proven risk of propagation of terrorist content to the general public exists, i.e. the scope should exclude services such as Cloud Infrastructure, Internet Infrastructure and Electronic Communication Services.
  • Amending the extremely short one-hour deadline to comply with removal orders, which would lead to over-removal of legal content online and is unworkable for many enterprises.
  • Ensuring that referrals are deleted from the proposal or substantially modified so they do not lead to private companies bearing the burden of deciding the legality of content instead of the judicial authorities in Member States.
  • Clearly aligning the proposal with the e-Commerce Directive, ensuring that any additional measures as drafted in Article 6 are not “proactive measures” which consist, directly or indirectly, of implementing mandatory filtering mechanisms thus inadvertently introducing a general monitoring obligation.
  • Ensuring that removal orders follow robust and accountable procedures and are issued by a single independent competent authority per Member State.
  • Including adaptable provisions for different types of companies and organisations.

Sincerely,
Access Now –
https://www.accessnow.org/
Allied for Startups –
https://alliedforstartups.org/
Computer & Communications Industry Association (CCIA) –
https://www.ccianet.org
Center for Democracy and Technology (CDT) –
https://cdt.org/
CISPE.cloud, representing Cloud Infrastructure Service Providers in Europe –
https://cispe.cloud/
EDiMA –
http://edima-eu.org
EDRi –
https://edri.org/
EuroISPA, the pan-European association of Internet Services Providers Associations –
https://www.euroispa.org
Free Knowledge Advocacy Group EU –
https://wikimediafoundation.org/

Open letter to the European Parliament on terrorist content online (18.03.2019)
https://edri.org/files/counterterrorism/20190318-TerroristContentRegOpenLetter.pdf

Terrorist Content Regulation: Document Pool
https://edri.org/terrorist-content-regulation-document-pool/

15 Mar 2019

All you need to know about copyright and EDRi

By EDRi

The final vote on the Copyright Directive’s text is set to take place on 26 March. Ahead of this crucial vote in the European Parliament plenary, here is some background on EDRi’s priorities around this topic.

EDRi’s position on copyright

European Digital Rights has long advocated for a copyright reform, proposing an update of the current EU copyright regime in line with the digital era. With copyright as one of the main objectives in our current work plan, we have promoted a positive agenda aimed at fixing the main problems within the existing framework. EDRi supports the idea that authors and artists receive recognition, remuneration and support for their work and creativity. However, we believe the current proposal falls short of these expectations. Instead, it introduces problematic measures that would restrict freedom of expression and reduce access to knowledge.

Copyright and EDRi – More than a summer’s love

EDRi has been involved in copyright discussions since it was founded. In 2013, we released an important handbook that explained the foundations of the profound disconnect that has developed between citizens and the law: “Copyright – challenges of the digital era”. We also provided responses to several EU public consultations on copyright (such as this one in 2014). Moreover, in 2016, EDRi released a series of blogposts called Copyfails. This blog series pointed out nine crucial issues that could, if solved, lead to a modernised copyright regime that takes into consideration the needs of all parts of society. Fair remuneration for authors, for example, has been one of our key demands in the copyright reform. In addition, EDRi has been and continues to be an active stakeholder in the Observatory on IP infringements of the European Union Intellectual Property Office (EUIPO).

Civic engagement – helping people to get their voice heard

In addition to providing extensive expert input to policy makers and the general public through our publications, EDRi often meets with decision-makers at EU and national levels and participates in public events (roundtables, conference panels, debates etc.). Whenever possible, we try to amplify people’s voices by providing the financial support needed for travel aimed at enhancing civic engagement. Enabling our community to meet policy makers is often the most effective way to ensure that civil society voices are heard in the debate.

One of these financial support initiatives was launched on 7 March: a travel grant for people willing to travel and meet their elected representatives (Members of the European Parliament, MEPs) for a discussion around the current proposal of Article 13. By providing these grants, as part of a wider Action Week, we aim to balance the current debate with civil society’s perspective. As shown by Corporate Europe Observatory’s report on the lobby money involved in this dossier, the debate has been overwhelmingly dominated by business lobby groups.

Transparency and accountability – a core EDRi value

EDRi’s work on copyright (including staff costs) is supported through our general budget. We also often fundraise for money that is project-specific, as is the case with the travel grants announced last week. Right now, we are in the process of receiving additional support for this action. Two thirds of the funds come from the Open Society Foundations, one of our core funders over the past years. The other third of the budget is covered by the annual budget of the Copyright for Creativity (C4C) Coalition, of which EDRi is a member. The funding for all travel grants, including the one launched on 7 March, involves no obligations on EDRi’s side towards our funders. In other words, EDRi independently decides who benefits from the budget and what the action’s promoted policies are.

Let’s stick to the subject

The substance of our debate is the analysis of the current version of Article 13 in the Copyright Directive reform. It is unfortunate that much of the support for Article 13 currently revolves around criticism of its opponents rather than engagement with the opponents’ arguments.

Therefore, we invite you to place the actual subject, Article 13, at the centre of your attention and to ask yourself: Why would any Member of the European Parliament support a copyright reform that harms small and medium enterprises, doesn’t benefit small independent authors but instead empowers tech giants, and sets a dangerous precedent for internet filters?

Read more:

Support our work
https://edri.org/donate

About EDRi
https://edri.org/about/

Copyright – challenges of the digital era
https://edri.org/wp-content/uploads/2013/10/paper07_web_20130202.pdf

Copyfails: Time to #fixcopyright!
https://edri.org/copyfails/

Copyright reform: Document pool
https://edri.org/copyright-reform-document-pool/

13 Mar 2019

What will happen to our memes?

By Bits of Freedom

In Europe, new rules concerning copyright are being created that could change the internet fundamentally. The upload filters included in the EU copyright Directive proposal raise concerns about the consequences for our creativity online. Will everything we want to post to the internet have to pass through “censorship machines”? If the proposed Directive is adopted and implemented, what will happen to your memes, for example?

The proposal that will shortly be voted on by the European Parliament contains new rules regarding copyright enforcement. Websites would have to check every upload made by their users for possible breaches of copyright, and must block this content when in doubt. Even though memes are often extracted from a movie, a well-known photo or a video clip, advocates of the legislation repeat time and again that this doesn’t mean memes will disappear − they reason that exceptions will be made for them. In practice, however, such an exception does not seem workable, and it impairs the speed and thus the essence of memes. It will be impossible for an automated filter to capture a meme’s context.

Step 1: You upload a meme

Imagine that you’re watching a series and you see an image that you would like to share with your friends − it could be something funny or recognisable to a large group of people. Or imagine that you use an existing meme to illustrate a post on social media. Maybe you adjust the meme with the names of your friends or the topic that concerns you at that moment. Then you upload it on YouTube, Twitter or another online platform.

Step 2: Your upload is being filtered

If the new Directive – as currently proposed – is implemented, the platform will be obliged to prevent any copyrighted material from appearing online. In order to abide by the legislation, platforms will install automated filters that compare all material uploaded to the platform with all the copyrighted material. If there is a match, the upload will subsequently be blocked. This will also be the case with the meme you intended to share online, because it originates from a television series, video clip or movie. You get the message: “Sorry, we are not allowed to publish this.”
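
In essence, the matching step boils down to a database lookup with no notion of context. Here is a toy sketch of that logic; real systems use perceptual fingerprints rather than the plain hash used here, and the example data is invented:

    import hashlib

    # Toy model of Step 2: match means block, context ignored.
    blocked = {hashlib.sha256(b"<frame from a well-known film>").hexdigest()}

    def handle_upload(data: bytes) -> str:
        if hashlib.sha256(data).hexdigest() in blocked:
            return "Sorry, we are not allowed to publish this."
        return "published"

    # Your meme reuses the film frame, so it matches and is blocked, even
    # though a quotation or parody exception would make it perfectly legal:
    print(handle_upload(b"<frame from a well-known film>"))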

Step 3: It’s your turn

What!? What about the exception that was supposed to be there for memes? Of course the exception is still there, but in practice it’s impossible to train filters to know the context of every image. How does a filter know what is a meme and what isn’t? How do these filters keep learning about new memes that appear every day? There are already many examples of filters that fail. Hence, you’ll need to get to work. Just like you can appeal against an online platform’s decision when it has wrongfully blocked a picture for depicting “nudity” or “violence”, you will be able to appeal when your meme couldn’t pass the filter. That probably means filling in a form in which you explain that it’s just a meme and why you think it should be allowed to be uploaded.

Step 4: Patience, please

After the form is filled in and you click “send”, all you can do is wait. Just as is already the case with the filters of YouTube and Facebook, the incorrectly filtered posts need to be checked by real human beings, people who can assess the context and hopefully come to the conclusion that your image really is a meme. But that process can take a while… It’s a pity, because your meme responded perfectly to current events. Swiftness, creativity and familiarity are three key elements of a meme. With upload filters, to keep the familiarity, you lose the swiftness.

Step 5: Your meme will still be posted online − or not?

At a certain moment in time, you receive a message. Either your upload has finally been accepted, or there might still be enough reasons to refuse it from being uploaded. And then what? Will you try again at another platform? That might take some days as well. The fun and power of memes often lies in the speed with which someone responds to a politician’s proposal or an answer in a game show. So don’t let Article 13 destroy your creativity!

#SaveYourInternet as we know it! Call a Member of the European Parliament (for free) through pledge2019.eu!

Bits of Freedom
https://www.bitsoffreedom.nl/

What will happen to our memes? (11.03.2019)
https://www.bitsoffreedom.nl/2019/03/11/what-will-happen-to-our-memes/

What will happen to our memes? (only in Dutch, 11.03.2019)
https://www.bitsoffreedom.nl/2019/03/04/wat-gebeurt-er-straks-met-onze-memes/

Pledge2019.eu
https://pledge2019.eu/en

Save Your Internet
https://saveyourinternet.eu/

(Contribution by Esther Crabbendam, EDRi member Bits of Freedom, the Netherlands; translation by Winnie van Nunen)

13 Mar 2019

FSFE: Publicly funded software has to result in public code

By Free Software Foundation Europe - FSFE

As the European Parliament elections approach, EDRi member Free Software Foundation Europe (FSFE) is intensifying its efforts for the “Public Money? Public Code!” campaign. In January 2019, FSFE published a new brochure that serves as a guideline for decision-makers, explaining the fundamental benefits of public code.

Free Software for a Free Society

Free and Open Source Software (FOSS) is a simple but powerful idea. The four freedoms that users have when interacting with software − use, study, share and improve − empower other fundamental liberties, such as freedom of speech, freedom of the press and the right to privacy. In fact, the digital sovereignty of public and private actors depends on software freedom.


Public administrations are important users and providers of software. They procure, fund and support the development of products and services that can affect large groups of people. However, when these endeavours do not involve Free Software, critical questions concerning security, efficiency, distribution of power, and transparency arise.

That is why FSFE informs decision makers such as politicians and civil servants on how to speed up the distribution and development of Free Software in public administration, as well as demanding appropriate legislation to ensure that publicly funded software becomes and remains public code.

Published in January 2019, the brochure “Public Money Public Code – Modernising Public Infrastructure with Free Software” compiles detailed and ready-to-use information about the multiple actions that public administrations can implement to modernise public digital infrastructure. Topics such as competition and vendor lock-in, security, procurement and international cooperation are discussed in a language the target audience understands. The publication combines the FSFE’s long-term experience with the knowledge of leading experts in various areas of information and communications technology.

Public Money? Public Code!

There are many good incentives and reasons for decision makers to put publicly funded code under a Free Software licence: tax savings, transparency, and innovation – just to name a few. The FSFE’s campaign “Public Money? Public Code!” demands that publicly financed software must be available under a FOSS licence.

The campaign includes an open letter to political representatives that is supported by more than 160 organisations. It already has more than 20 000 signatures, and it is still open for new supporters. FSFE is encouraging people to join campaign activities around the upcoming EU elections, spreading the knowledge, and highlighting the fundamental topics to their Members of the European Parliament (MEPs).

Free Software Foundation Europe – FSFE
https://fsfe.org/

Public Money Public Code – Modernising Public Infrastructure with Free Software
https://fsfe.org/campaigns/publiccode/brochure

Press release: FSFE publishes expert brochure about “Public Money? Public Code!” (24.01.2019)
https://fsfe.org/news/2019/news-20190124-01.html

“Public Money? Public Code!” campaign
https://publiccode.eu/

“Public Money? Public Code!” open letter
https://publiccode.eu/openletter/

(Contribution by Free Software Foundation Europe – FSFE)

13 Mar 2019

The art of dodging questions – Facebook’s privacy policies

By Chloé Berthélémy

Remember how, in April 2018, after the Cambridge Analytica scandal broke, we sent a series of 13 questions to Facebook about their users’ data exploitation policy? Months later, Facebook got back to us with answers. Here is a critical analysis of their response.

Recognising people’s faces without biometric data?

The first questions (1a and 1b) related to Facebook’s new facial recognition feature, which scans every uploaded image for faces and compares them to those already in Facebook’s database in order to identify users. Facebook claims that the identification process only works for users who explicitly consented to having the feature enabled, and that the initial detection stage, during which the photograph is analysed, does not involve the processing of biometric data. Biometric data is data used to identify a person through unique characteristics like fingerprints or facial features.

There are two issues here. First, contrary to what Facebook declared, the first batch of users for whom face recognition was activated received a notice but were not asked for consent. All users were opted in by default, and only a visit to the settings page allowed them to say “no”. For the second batch of users, Facebook apparently decided to automatically opt in only those accounts that had the photo tag suggestion feature activated, simply assuming that they wanted face recognition, too. Obviously, this does not constitute explicit consent under the General Data Protection Regulation (GDPR).

Second, even if Facebook does not manage to identify users who disabled the feature or people who are not users, their photos might still be uploaded and their faces scanned. No technology can determine whether an image contains only users who gave consent, without actually scanning every uploaded photo to search for facial features.

Facebook has been presenting this new feature as an empowerment tool for users to control which pictures of them are being uploaded on the platform, to protect privacy and to prevent identity theft. However, EU officials and digital rights advocates denounced this communication practice as manipulating user consent by promoting facial recognition as an identity protection tool.

Privacy settings by default

One of our questions related to the initial settings every Facebook user has when creating an account and their protection levels by default (question 3). Facebook responded that it had suspended the search for people by phone number in the Facebook search bar. Since Facebook responded to our questions in August 2018, it seems that it reinstated this function, set to “Everyone can look you up using your phone number” by default (see the Belgian account settings below, last consulted on 24 January 2019).

This reinstatement is probably linked to the upcoming merger of the Facebook-owned messaging systems: Facebook Messenger, WhatsApp and Instagram messaging. Identification requirements for each messaging application are different: a Facebook account for Messenger, a phone number for WhatsApp and an email for Instagram. The merger gives Facebook the possibility to cross-reference information and to connect several profiles under a single, unified identity. What is worse, Facebook now reportedly makes searchable the phone numbers that users had provided for two-factor authentication, and there is no way to switch this feature off.

Other default privacy settings on Facebook are not protective either. Access to a user’s friend list is set to “publicly visible”, for example. Facebook justified the low privacy level by repeating that users join Facebook to connect with others. Nonetheless, even if users want to limit who can see their friend lists, people can see their Facebook friendships by looking at the publicly accessible friend lists of their friends. Some personal information will simply never be fully private under Facebook’s current privacy policies.

The Cambridge Analytica case

Facebook pleaded the misuse of its services and shifted the entire responsibility for the Cambridge Analytica scandal onto the quiz application “This Is Your Digital Life” (our questions 4 and 5). The app requested permission from users to access their personal messages and newsfeed. According to Facebook, there was no unauthorised access to data, as consent was freely given by users. However, accessing one user’s newsfeed and personal messages also meant that the application could access received posts and messages, that is to say content from users who did not consent. Once again, individual privacy is highly dependent on others’ carefulness. Facebook admitted that it wished it had notified affected users who did not give consent earlier. To our question why the appropriate national authorities were not notified of the incident immediately, Facebook gave no answer.

“This Is Your Digital Life” is just one application, but there may be many more that harvest similar amounts of personal data without the consent of users. Facebook assured us that it has made it harder for third parties to misuse its systems. Nevertheless, the limits to the processing of collected data by third parties remain unclear, and we received no answer about the current possibilities for other applications to share and receive users’ messages.

Facebook’s ad targeting practices

“Advertising is central not only to our ability to operate Facebook, but to the core service that we provide, so we do not offer the ability to disable advertising altogether.” While advertising is non-negotiable (our question 9), Facebook explained that through its new Ad Preferences tool (our question 6) users can nevertheless decide whether or not they want to see ads that are targeted at them based on their interests and personal data. The Ad Preferences tool gives users control over the criteria used for targeted advertising: data provided by the user, data collected from Facebook partners, and data based on the user’s activity on Facebook products. Users can also hide advertisement topics and disable advertisers with whom they have interacted.

But if Facebook were treating ad settings the same way as privacy settings, as it claims to do, the default settings for a new user would look very different: for this article we created a new Facebook account and found that Facebook does not guide new users through the opt-in and opt-out options for privacy and ad settings. On the contrary, Facebook’s default ad settings involve the profiling of new users based on their relationship status, job title, employer and education (see the new account settings below). Those defaults are clearly incompatible with the GDPR’s “privacy by default” requirement.

Ads are also based on the activity on Facebook products, present on “websites, apps and devices that use [Facebook’s] advertising services”. This includes everything from social media plugins such as “Like” or “Share” buttons to Facebook Messenger, Instagram or even WhatsApp, which has its own stand-alone terms of service and privacy policy. If a third-party website uses Facebook Analytics, traces left by the user on that website will be used as well. Since Facebook is acquiring more and more applications, the list goes on and on. “Data from different apps can paint a fine-grained and intimate picture of people’s activities, interests, behaviours and routines, some of which can reveal special category data, including information about people’s health or religion.”

In the same vein, EDRi member Privacy International found that Facebook collects personal information on people who are logged out of Facebook or don’t even have a Facebook account. The social media company owns so many apps, “business tools” and services that it is capable of tracking users, non-users and logged-out users across the internet. Facebook doesn’t seem to be willing to change its business practices to respect people’s privacy. Privacy is not about what Facebook users can see from each other but what information is accessed and used by third parties and for which purposes without the users’ knowledge or consent.

Profiling and automated decision-making

Article 22 of the GDPR introduces a right not to be subject to a decision based solely on automated processing, including profiling, which produces legal or “similarly significant” effects for the user. We asked Facebook what measures it takes to make sure its ad targeting practices, notably for political ads, are compliant with this provision (question 7). In its answer, Facebook considers that its targeted ads based on automated decision-making do not have legal or similarly significant effects yet. In light of the numerous scandals the company has been facing around the manipulation of the 2016 U.S. elections and the Brexit referendum, this answer is quite surprising: many would argue that the way Facebook targets voters with ads based on automated decision-making does have “similarly significant”, if not legal, effects for its users and societies as a whole. Unfortunately, Facebook doesn’t seem to consider that it should change its ad targeting practices.

Special categories of data

Article 9 of the GDPR defines special categories of particularly sensitive data that include racial or ethnic origin, political opinions, religious beliefs, health, sexual orientation and biometric data. Facebook says that without the user’s explicit consent to use such special categories of data, they will be deleted from respective profiles and Facebook’s servers (our question 2.a).

What Facebook doesn’t say is that users don’t even need to share this information for the platform to monetise it. Facebook can simply deduce religious views, political opinions and health data from which third-party websites users visit, what they write in Facebook posts, and what they comment on and share: Facebook does not need users to fill in their profile fields when it can infer extremely sensitive information from all the other data users generate on the platform day in, day out. Facebook can then assign different ad preferences (such as “LGBT community”, “Socialist Party”, “Eastern Orthodox Church”) based on each user’s online activities, without asking for consent at all, and exploit them for advertising purposes. Researchers argue that the practice of labelling Facebook users with ad preferences associated with special categories of personal data may be in breach of Article 9 of the GDPR, because no legal basis other than explicit consent could allow this form of use. In its reply to our questions, Facebook deliberately omitted its use of sensitive data derived from user behaviour, posts, comments, likes and so on to feed its marketing profiles. It is too easy to focus on the tip of the iceberg.
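
As a purely illustrative toy sketch of the principle – all rules, labels and domains below are invented, not Facebook’s actual system – such inference needs nothing more than a mapping from observed behaviour to a sensitive label:

    # Toy sketch of behavioural inference: sensitive "ad preferences" derived
    # from activity alone, never from a profile field the user filled in.
    INFERENCE_RULES = {
        "visited:prayer-times.example": "Religion: Orthodox",
        "liked:Socialist Party page":   "Politics: left",
        "searched:insulin pumps":       "Health: diabetes",
    }

    def infer_ad_preferences(activity_log):
        return {label for event, label in INFERENCE_RULES.items()
                if event in activity_log}

    print(infer_ad_preferences(["visited:prayer-times.example",
                                "liked:Socialist Party page"]))
    # -> {'Religion: Orthodox', 'Politics: left'}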

Right to access

Replying to our request on the right to access, download, erase or modify personal data, Facebook described its three main tools: Download Your Information (DYI), Access Your Data (AYD) and Clear History (our question 8). According to Facebook, DYI provides users with all the data they provided on the platform. But as explained above, this does not include information inferred by the platform based on user behaviour, posts, comments, likes and so on, nor information provided by friends or other users, such as tags in photos or posts.

Lastly, Facebook confirmed that it was not using smartphone microphones to inform ads (our question 12). This might even be true, because Facebook already has plenty of surveillance tools at hand to gather enough information about users to produce disconcerting advertisements.

Questions left without answers

  1. What was the cut-off date before Facebook started deleting information that users had added to their profiles without giving explicit consent for its processing?
  2. Will Facebook offer a single place where people who have no Facebook account can control every privacy aspect of Facebook?
  3. If Facebook apps were to use smartphone microphones in any way, would you consider that lawful?
  4. You claim to offer a way for users to download their data with one click. Can you confirm that the downloaded files contain all the data that Facebook holds on each user?

Written Responses to EDRi Questions (22.06.2018)
https://edri.org/files/edri_responses_facebook_20180622.pdf

Privacy International’s study on ‘How Apps on Android Share Data with Facebook – Report’ (29.12.2018)
https://privacyinternational.org/report/2647/how-apps-android-share-data-facebook-report

Facebook Use of Sensitive Data for Advertising in Europe
https://arxiv.org/pdf/1802.05030.pdf

Facebook Doesn’t Need To Listen Through Your Microphone To Serve You Creepy Ads (13.04.2018)
https://www.eff.org/fr/deeplinks/2018/04/facebook-doesnt-need-listen-through-your-microphone-serve-you-creepy-ads

(Contribution by Chloé Berthélémy, EDRi)

13 Mar 2019

Record number of calls to the EU Parliament against upload filters

By Epicenter.works

With just two weeks to go until the final vote on upload filters in the European Parliament, one hundred MEPs have pledged to vote against Article 13 of the proposed Copyright Directive. Many citizens feel that their legitimate fears about the future of the internet are not taken seriously, as lawmakers dismiss them as “bots” or simply “a mob”. Public protests demanding the removal of Article 13 have been announced in 23 European cities.

Thousands of EU citizens have taken part in the pledge2019.eu campaign, picking up their phones and calling their representatives. Since the campaign was launched at the end of February, citizens have called their elected representatives more than 1200 times and spent over 72 hours on the phone with them. This unprecedented number of phone calls to politicians demonstrates just how much people care about an open, uncensored internet. It also shows that citizens are interested in engaging in European political issues, if they are given the chance. After previous attempts by citizens to reach out to policy makers via social media or e-mail were discredited as originating from bots or being part of a mob, citizens are now going the extra mile and voicing their concerns directly to their elected representatives.

The damage done to Europe’s democracy by claiming that citizens voicing their concerns are a manufactured campaign is immense. A whole generation of internet users is learning that their legitimate fears about the consequences of the proposal for modern everyday cultural expression and media habits are being ignored and ridiculed. In reaction, the protest movement against Article 13 gave itself the slogan “We are no bots”.

Upload filters are quickly becoming a major issue in the upcoming EU elections, as demonstrators are joined by a host of experts against Article 13: UN Special Rapporteur on freedom of expression David Kaye warns against the threat to our freedom of expression online, academics specialising in intellectual property law call the proposal “misguided”, the founder of the world wide web Sir Tim Berners-Lee, together with other internet luminaries, warns about the imminent threat to the open internet, and the International Federation of Journalists calls on policy makers to rethink this unbalanced copyright Directive.

On 23 March 2019 several rallies and demonstrations will be organised against Article 13 all around Europe. Concerned citizens still have two weeks left to visit www.pledge2019.eu and make their voices heard.

Epicenter.works
https://epicenter.works/

Save Your Internet
https://saveyourinternet.eu/

Pledge2019.eu
https://pledge2019.eu/en

Save Your Internet – Call for a Pan-European Day of Protests on 23 March!
https://savetheinternet.info/demos

(Contribution by Thomas Lohninger, EDRi member Epicenter.works, Austria)

08 Mar 2019

Women’s rights online: tips for a safer digital life

By Chloé Berthélémy

The internet is an incredible tool and has empowered women to speak up, react and organise to face patriarchy and oppression. But the internet is not a neutral place – sexist, racist, homophobic and other violent types of behaviour and content are disproportionately affecting women. This International Women’s Day, we would like to celebrate positive stories and provide practical tips, accessible tools and material for women’s digital safety, security and privacy.

This article covers:

  1. Browsing safely and anonymously
  2. Securing accounts and communications
  3. Gaming safely
  4. Facing and recovering from online harassment
  5. More resources

Women are more likely to be subjected to online harassment and violence, massive campaigns of abuse and intimidation, or the exploitation and manipulation of private data. An Amnesty International report found that women of colour, women with disabilities, lesbian, bisexual and trans women, and women at the intersection of several forms of oppression are targeted even more. The factors are manifold: the lack of accountability of malicious attackers, which creates a feeling of impunity, or the lack of knowledge among companies and developers about violence and abuse on their infrastructures. Victims are left with little support for the violence they have encountered. This leads women to self-censor, restricting their freedom of expression and their meaningful participation online.

Browsing safely and anonymously

When browsing the web, personal data and internet activity are being collected and recorded. Websites collect data such as demographics, intimate interests and tastes, personal habits and hobbies. This enormous amount of personal data includes sensitive information like credit card data, physical location, sexual preferences, religion, health and others. This information is extremely valuable to companies, governments and malicious actors alike and can be exploited and facilitate targeted attacks on women. One part of the solution is to use encryption. Using encryption is not as hard as it seems: Start with HTTPS Everywhere, a browser add-on that tells websites you visit to use encryption when available (a browser add-on is a small programme that customises your browser’s behaviour).

The infamous cookies are small pieces of data stored by websites on your devices, originally designed to remember your previous choices on a website, such as form fields, shopping cart items and language choice. Today, they are often used by third parties to assign you a unique identifying number which helps advertising companies follow you around across the web. While you probably want to allow some of the useful cookies on shopping portals and other websites, it’s definitely a good idea to block all third-party cookies. This can be done directly in your browser settings.
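
To see why third-party cookies are the ones worth blocking, consider this much-simplified simulation (all domains invented): a single cookie set by an embedded tracker is enough to link your visits across unrelated sites.

    # Simplified simulation of third-party cookie tracking.
    # news.example and shop.example both embed content from tracker.example,
    # which sets ONE cookie in your browser and reads it back on every site.
    cookie_jar = {}       # your browser's cookie store, keyed by domain
    tracker_profile = {}  # what tracker.example learns, keyed by cookie value

    def visit(first_party, embedded_third_party):
        uid = cookie_jar.setdefault(embedded_third_party, "uid-4711")
        tracker_profile.setdefault(uid, []).append(first_party)

    visit("news.example", "tracker.example")
    visit("shop.example", "tracker.example")
    print(tracker_profile)  # {'uid-4711': ['news.example', 'shop.example']}
    # Blocking third-party cookies keeps tracker.example out of the cookie
    # jar, so visits to different sites can no longer be linked together.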

Other forms of snooping include website trackers, which are mostly used by advertising companies. Trackers are little snippets of computer code, often invisibly embedded in advertisements on all kinds of websites, including your favourite newspaper, shopping site and social network. Trackers are often served by a third party such as Google or Facebook rather than by the original owner of a website. You know those “Like” buttons you find all over the web? That’s actually a tracker telling Facebook which sites you’ve visited and which newspaper articles you’ve read. Luckily, two simple browser add-ons will help you block undesired trackers: install Privacy Badger and uBlock Origin and you’re good to go.
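
Privacy Badger, notably, does not rely on fixed blocklists but learns as you browse. Roughly, and much simplified here as a sketch of the idea rather than its actual implementation, a third-party domain that shows up tracking you on several unrelated sites gets blocked:

    from collections import defaultdict

    # Much-simplified sketch of a learning tracker blocker: a third party
    # observed on three or more distinct sites is treated as a cross-site
    # tracker and blocked from then on.
    seen_on = defaultdict(set)  # third-party domain -> first-party sites

    def observe(first_party, third_party):
        seen_on[third_party].add(first_party)

    def should_block(third_party, threshold=3):
        return len(seen_on[third_party]) >= threshold

    for site in ("news.example", "blog.example", "shop.example"):
        observe(site, "ads.example")
    print(should_block("ads.example"))  # True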

Alternatively, in order to increase anonymity, you can use the Tor network or a Virtual Private Network. Those tools are particularly tailored and recommended for politically active women, human rights defenders or even women fearing for their safety. More information can be found here and here.

For women especially, the collection of data for commercial purposes can be very intrusive. Many doubts have been cast on menstruapps, very popular health-related mobile applications that help women monitor their menstrual cycles. Not only do these apps know when users have their period, they also invite users to share very intimate details about their periods, like symptoms or sexual drive. Menstruation, pregnancy, online dating and many more aspects of women’s lives are turned into marketing targets. One more piece of advice: never blindly trust mobile apps.

Lastly, it is important to note that websites often request too much information about users in order to allow us to use their service. More than just an email address and a password, websites may require a name, a location, and other unnecessary details. A good rule to follow is to only give personal information that is absolutely necessary – an email address to receive a registration confirmation or to retrieve a password, for example. The rest is up to one’s imagination and creativity: fake address, fake birth date, etc. Faking means lowering the risk of having personal information compromised.

Securing accounts and communications

Staying safe online also means protecting your communications and accounts against identity theft and hacking. When it comes to securing personal accounts, strong passwords are key. Here are the latest rules to create super strong passwords. Don’t use the same password across websites and services, and if you have more passwords than you can remember, use a password manager that keeps them all in one secure place for you. Another good practice to reduce the risk of hacking is to activate two-factor authentication when it is available: after entering a password, you will receive a second code on a different device or service.
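
For the technically curious: “strong” essentially means long and random. A small sketch using Python’s standard secrets module, which is designed for security-sensitive randomness (the short word list is for illustration only; real passphrase lists contain thousands of words):

    import secrets
    import string

    # Random 16-character password drawn from letters, digits and punctuation.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    password = "".join(secrets.choice(alphabet) for _ in range(16))

    # Diceware-style passphrase: easier to remember, still hard to guess.
    words = ["correct", "horse", "battery", "staple", "violet", "anchor"]
    passphrase = "-".join(secrets.choice(words) for _ in range(5))

    print(password)
    print(passphrase)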

As for browsing, encryption is good practice for communication, too, in order to avoid data mining by marketers and surveillance agencies. Pretty Good Privacy (PGP) for emails and messaging apps like Signal offer end-to-end encryption and are good starting points.
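
What “end-to-end” means in practice: only the intended recipient’s private key can decrypt, so whatever server relays the message never sees readable content. Here is a minimal sketch using the PyNaCl library (pip install pynacl); Signal’s actual protocol is considerably more elaborate, but the public-key principle is the same:

    from nacl.public import PrivateKey, Box

    # Each party generates a key pair; private keys never leave their device.
    alice_key = PrivateKey.generate()
    bob_key = PrivateKey.generate()

    # Alice encrypts for Bob with her private key and Bob's public key.
    ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

    # Whoever relays the message (a mail or chat server) sees random bytes.

    # Bob decrypts with his private key and Alice's public key.
    plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
    print(plaintext)  # b'meet at noon'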

Intimate communications such as explicit pictures are particularly vulnerable content that can be used for all kinds of harassment practices, such as “doxxing” (the publication of private information) or “revenge porn”. Specific advice on how to sext safely can be found here.

Gaming safely

When it comes to gaming, and especially multiplayer games, the experience for women can be less than enjoyable. In order to stay safe from harassment or sexism, there are a couple of things you can do: make use of games’ reporting systems, mute individual players in the chat function, register with a pseudonym that does not hint at your gender instead of your real name, don’t use a gamertag that you already use in other social media profiles, don’t use a real photo of yourself for your profile, and don’t give away any personal information in chats, such as your phone number or location.

Facing and recovering from online harassment

Women – and in particular women of colour, women with disabilities and lesbian, bisexual or trans women – represent the majority of harassment and violence targets. As a consequence, many women’s experience on social media leads them to self-censor what they post, and sometimes even delete their account. If you’re experiencing harassment on social media platforms such as Twitter, there are possibilities to cope with the situation and fight back. For example, victims can ask platforms to delete, suspend or send a warning to harassing accounts. HeartMob is a supportive tool where people can document the harassment they are experiencing on social media and request the support they need from an online community.

For women who are human rights defenders or political activists, taking action on this issue may include developing fully-fledged security and protection strategies for human rights defenders. Threats, incitement to rape and any other form of violence are illegal and can be reported to law enforcement authorities. Victim-support NGOs and services can assist you.

More resources
