Copyright

In the digital era, copyright should be implemented in a way which benefits creators and society. It should support cultural work and facilitate access to knowledge. Copyright should not be used to lock away cultural goods, damaging rather than benefitting access to our cultural heritage. Copyright should be a catalyst of creation and innovation. In the digital environment, citizens face disproportionate enforcement measures from states, arbitrary privatised enforcement measures from companies and a lack of innovative offers, all of which reinforces the impression of a failed and illegitimate legal framework that undermines the relationship between creators and the society they live in. Copyright needs to be fundamentally reformed to be fit for purpose, predictable for creators, flexible and credible.

20 Nov 2019

A privately managed public space?

By Heini Järvinen
  • Our online “public spaces”, where we meet each other, organise, or speak about social issues, are often controlled and dominated by private companies (platforms like Facebook and YouTube).
  • Pushing platforms to decide which opinions we are allowed to express and which we are not is not going to solve the major problems in our society.
  • The EU rules on online content moderation are soon going to be reviewed. To ensure our right to freedom of expression, we need to make sure these updated rules will not encourage online platforms to over-remove content in order to avoid being taken to court.

Your video on YouTube got removed, without a warning. Or the page you manage on Facebook was blocked because your posts breached the “community standards”. You’ve sent messages to the platform to sort this out, but there’s no reply, and you have no way of getting your content back online. Maybe you’ve experienced this? Or if not, you surely know someone who has.

The internet is a great place – a sort of “public space” where everyone has equal possibilities to share their ideas, creations, and knowledge. However, the websites and platforms where we most frequently hang out, share and communicate, like Facebook, Twitter, Instagram or YouTube, are not actually public spaces. They are spaces controlled by private businesses, with private business interests. That’s why your page got blocked, and your video removed.

Anyone should be free to express their opinions and views, even if not everyone likes those opinions, as long as they aren’t breaking any laws. The problem is that the private businesses dominating our “public spaces” online would rather delete anything that looks even remotely risky for them (a potential copyright infringement, for example). There are also financial interests: these businesses exist to make profit, and if certain content doesn’t please their ad business clients, they will likely limit its visibility on their platform. And they can easily do it, because they can use their arbitrary “terms of service” or “community standards” as a cover, without having to justify their decisions to anyone. This is why it shouldn’t be left to online companies to decide what is illegal and what is not.

There’s an increasing trend to push online platforms to do more about “harmful” content and to take more responsibility. However, obliging the platforms to remove content is not going to solve the problems of online hate speech, violence, or the polarisation of our societies. Rather than fiddling around trying to treat the symptoms, the focus should be on addressing the underlying societal problems.

Whenever content is taken down, there’s always a risk that our freedom to express our opinions is being limited in an unjustified way. It is, however, better that the decisions about what you can and cannot say are made on the basis of law rather than on the interests of a profit-seeking company.

There are rules in place that limit online companies’ legal responsibility for the content users post or upload on their platforms. One of them is the EU E-Commerce Directive. To update the rules on how online services should deal with illegal and “harmful” content, the new European Commission will likely soon review it and replace it with a new set of rules: the Digital Services Act (DSA). To ensure we can keep our right to freedom of expression, we need to make sure these updated rules will not encourage online platforms to over-remove content.

When dealing with videos, texts, memes and other content online, we need to find a nuanced approach that treats the different types of content differently. What do you think the future of freedom of expression online should look like?

E-Commerce review: Opening Pandora’s box? (20.06.2019)
https://edri.org/e-commerce-review-1-pandoras-box/

Facebook and Google’s pervasive surveillance poses an unprecedented danger to human rights (21.11.2019)
https://www.amnesty.org/en/latest/news/2019/11/google-facebook-surveillance-privacy/

LGBTQ YouTubers are suing YouTube over alleged discrimination (14.08.2019)
https://www.theverge.com/2019/8/14/20805283/lgbtq-youtuber-lawsuit-discrimination-alleged-video-recommendations-demonetization

(Contribution by Heini Järvinen, EDRi)

20 Nov 2019

ePrivacy hangs in the balance, but it’s not over yet…

By Ella Jakubowska

Unless you have been living under a rock (read: outside the “Brussels bubble”) you will likely be aware of the long and winding road on which the proposed ePrivacy Regulation has been for the last three years. This is not unusual for a piece of European Union (EU) legislation – the 2018 General Data Protection Regulation (GDPR) is a great example of the painful, imperfect, but ultimately fruitful processes that EU law goes through, in this case in a marathon spanning almost 25 years! Even now, Data Protection Authority (DPA) fines, litigation and regulatory reviews are testing the benefits and boundaries of GDPR, helping to shape it progressively into an even more effective piece of legislation.

Let us rewind to January 2017, when the European Commission delivered its long-awaited proposal for a Regulation on Privacy and Electronic Communications, also known as “ePrivacy”. In October of the same year, the European Parliament Committee on Civil Liberties, Justice and Home Affairs (LIBE) proposed a comprehensive series of improvements to the text in order to better protect fundamental rights. This included enhanced confidentiality of communications and privacy as a central foundation of online product and service design. We welcomed these amendments for their respect for and promotion of digital rights.

Unfortunately, the Council of the European Union has since seriously watered down the draft text, introducing worrying limits to the safeguards that ePrivacy offers for personal data and communications. In response to the worsening protections – and the negotiations lingering like a bad smell – EDRi, Access Now, Privacy International, and two other civil society organisations co-authored an open letter to the EU Member States on 10 October 2019, urging them to swiftly adopt a strong ePrivacy Regulation. Yet the most recent Council text has still not improved in any respect. Concerningly, its introductory remarks use the emotive age-old arguments of child protection and terrorism to justify some vague “processing of communications data for preventing other serious crimes”. We believe this represents a slippery slope of surveillance and intrusion, and undermines the fundamental purpose of ePrivacy: protecting our fundamental right to privacy and confidentiality of communications.

The political stage of the file is now coming to a close after almost three painful years of back and forth. The imminent fate of the Council’s proposal will be decided on 22 November 2019 at COREPER level. If the Member States vote to adopt the Council text, the file will move forward to the trilogue stage, where the Parliament, the Council and the Commission will negotiate the final shape of the legislation. If the Member States vote to reject the Council text, however, all options – including the complete withdrawal of the ePrivacy proposal by the European Commission – will be on the table.

Despite these challenges, ePrivacy remains an essential piece of legislation for safeguarding fundamental rights in the online environment. Complementing the GDPR, a strong ePrivacy text can still protect the privacy of individuals, ensure mechanisms for meaningful consent, and establish rules on the role of each Member State’s Data Protection Authority (DPA) as their supervisory authority. It will embed privacy by design and default, making the internet a more secure space for everyone.

To quote an infamous political figure, we will not “die in a ditch” over ePrivacy. Whatever the outcome of the COREPER vote, we will continue to work tirelessly to secure the right to online privacy across Europe. So get your popcorn ready, stay tuned for the next episode in this epic saga, and be prepared in the event of some last minute plot-twists!

The History of the General Data Protection Regulation
https://edps.europa.eu/data-protection/data-protection/legislation/history-general-data-protection-regulation_en

e-Privacy revision: Document pool
https://edri.org/eprivacy-directive-document-pool/

EU Council considers undermining ePrivacy (25.07.2018)
https://edri.org/eu-council-considers-undermining-eprivacy/

Five reasons to be concerned about the Council ePrivacy draft (26.09.2018)
https://edri.org/five-reasons-to-be-concerned-about-the-council-eprivacy-draft/

Open letter to EU Member States: Deliver ePrivacy now! (10.10.2019)
https://edri.org/open-letter-to-eu-member-states-deliver-eprivacy-now/

The most recent European Council ePrivacy text (15.11.2019)
https://www.politico.eu/wp-content/uploads/2019/11/file.pdf

(Contribution by Ella Jakubowska, EDRi intern)

14 Nov 2019

“E-evidence”: Repairing the unrepairable

By EDRi

On 11 November 2019, Member of the European Parliament (MEP) Birgit Sippel (S&D), Rapporteur for the Committee on Civil Liberties, Justice and Home Affairs (LIBE) presented her draft Report, attempting to fix the many flaws of the European Commission’s “e-evidence” proposal. Has Sippel MEP been successful at repairing the unrepairable?

The initial e-evidence proposal by the Commission aims to allow law enforcement agencies across the EU to access electronic information more quickly by requesting it directly from online service providers in other EU countries. Unfortunately, the Commission forgot to build in meaningful human rights safeguards that would protect suspects and other affected persons from unwarranted data access.

The Commission proposal is not only harmful, but simply not needed at this point. To speed up cross-border access to data for law enforcement, there is already the European Investigation Order (EIO). It has only existed since 2018 and has never been systematically evaluated, let alone improved.

From a fundamental rights perspective, the draft Report comes with a number of very important improvements. If adopted, they would help fix some of the worst flaws in the original e-evidence proposal.

Here is what Member of the European Parliament (MEP) Birgit Sippel suggests, and what that means for fundamental rights:

👍 Framing is important. While the Commission’s proposal treats all information accessed under the new law as if it was admissible evidence, Sippel MEP recalls that what law enforcement actually accesses is people’s data. Only a fraction of that data is likely to be relevant for ongoing criminal proceedings. She therefore correctly proposes to replace “electronic evidence” with the more accurate term “electronic information”.

👍 One of the Commission proposal’s biggest flaws is that it would allow any law enforcement agency or court in the EU to force companies like email providers and social networks in other EU countries to directly hand over the personal information of their users. The judicial authorities of that other EU country would no longer be involved and would in fact never know about the data access. To mitigate those risks, Sippel MEP proposes a mandatory notification to the judicial authorities of the country in which the online provider is located. That way, authorities can intervene in cases that threaten fundamental rights and stop unwarranted data access requests.

👍 & 👎 Sippel MEP proposes that authorities requesting data must consult the judicial authorities of the country in which the affected person has their habitual place of residence “where it is clear” that the person whose data is sought is residing in another country. Involving the country of residence makes a lot of sense because only their authorities may know about particular protections a lawyer, doctor, or journalist has. Unfortunately, according to the draft Report, this consultation only needs to happen where it is clear that the affected person lives in another country—a term that is undefined and easy to bend.
🔧 How to repair it: The involvement of the country of residence should be mandatory when it’s known or could have been known that the person whose data is sought lives there.

👎 Although the judicial authorities of the affected person’s country of residence would be consulted in some instances under the proposal by Sippel MEP (see point above), their opinion in any given case would only be “duly taken into account”.
🔧 How to repair it: The authorities of the affected person’s country of residence should be able to block infringing foreign data requests. The affected person’s country of residence is usually best placed to protect their fundamental and procedural rights and to know about potential special protections of journalists, doctors, lawyers, and similar professions.

👍 The draft Report streamlines and fixes the skewed data definitions introduced by the Commission and brings them in line with existing EU legislation. “Traffic data” replaces former overlapping “access” and “transactional” data categories. IP addresses, which can be very revealing of private lives and daily habits, benefit from a higher protection level by being defined as traffic data.

👍 The draft Report introduces an extensive list of possible grounds for non-recognition or non-execution of foreign data access requests, aimed at protecting accused persons from illegitimate requests. The grounds for refusal include non-respect of the principles of ne bis in idem (one cannot be judged twice for the same offence) and of dual criminality (the investigated conduct needs to be a criminal offence in all jurisdictions concerned).

👍 Sippel MEP proposes to extend the data access request instruments created by the new law to the defence of the suspected or accused person. This approach strengthens the principle of “equality of arms”, according to which the suspected or accused person should have a genuine opportunity to prepare and present their case in the event of a trial.

👍 The LIBE draft Report beefs up the affected person’s rights to obtain effective remedies and to a fair trial. The Rapporteur proposes that the person who is targeted by a data access request should be notified by default by the service provider, except in circumstances where such notification would negatively impact an investigation. In that case, the state requesting the data (issuing state) has to obtain a court order to receive it.

👎 Lastly, the draft Report fails to question whether direct cooperation with online service providers is at all needed. The Commission argues that direct cooperation for law enforcement is necessary to prevent relevant electronic evidence from being removed by suspects. However, the proposed instrument of a European Preservation Order would be less intrusive and most likely sufficient to achieve that aim (similar to a “quick data freeze” order).
🔧 How to repair it: The European Production Order Certificate (EPOC) should be completely removed from the law. Law enforcement agencies should use the European Preservation Order to quick-freeze data they believe could contain relevant electronic evidence. The acquisition of that data should be done through the safer channels of the European Investigation Order (EIO) and Mutual legal assistance treaty (MLAT).

LIBE draft Report on the “e-evidence” proposal (24.10.2019)
https://www.europarl.europa.eu/doceo/document/LIBE-PR-642987_EN.pdf

EDRi Recommendations on cross-border access to data (25.04.2019)
https://edri.org/files/e-evidence/20190425-EDRi_PositionPaper_e-evidence_final.pdf

Cross-border access to data for law enforcement: Document pool
https://edri.org/cross-border-access-to-data-for-law-enforcement-document-pool/

EDPS opinion on Proposals regarding European Production and Preservation Orders for electronic evidence in criminal matters (06.11.2019)
https://edps.europa.eu/sites/edp/files/publication/opinion_on_e_evidence_proposals_en.pdf

EU rushes into e-evidence negotiations without common position (19.06.2019)
https://edri.org/eu-rushes-into-e-evidence-negotiations-without-common-position/

(Contribution by Jan Penfrat and Chloé Berthélémy, EDRi)

12 Nov 2019

EDRi is looking for a Senior Policy Advisor

By EDRi

European Digital Rights (EDRi) is an international not-for-profit association of 42 digital human rights organisations from across Europe and beyond. We defend and promote rights and freedoms in the digital environment, such as the right to privacy, personal data protection, freedom of expression, and access to information.

EDRi is looking for a talented and dedicated Senior Policy Advisor to join EDRi’s team in Brussels. This is a unique opportunity to be part of a growing and well-respected NGO that is making a real difference in the defence and promotion of online rights and freedoms in Europe and beyond. The deadline to apply is 2 December 2019.

Key responsibilities:

As a Senior Policy Advisor, your main tasks will be to:

  • Monitor, analyse and report about human rights implications of EU digital policy developments;
  • Advocate for the protection of digital rights, particularly but not exclusively in the areas of artificial intelligence, data protection, privacy, net neutrality and copyright;
  • Provide policy-makers with expert, timely and accurate input;
  • Draft policy documents, such as briefings, position papers, amendments, advocacy one-pagers, letters, blogposts and EDRi-gram articles;
  • Provide EDRi members with information about relevant EU legislative processes, coordinate working groups, help develop campaign messages, and inform the public about relevant EU legislative processes and EDRi’s activities;
  • Represent EDRi at European and global events;
  • Organise and participate in expert meetings;
  • Maintain good relationships with policy-makers, stakeholders and the press;
  • Support and work closely with other staff members including policy, communications and campaigns colleagues and report to the Head of Policy and to the Executive Director;
  • Contribute to the policy strategy of the organisation.

Desired qualifications and experience:

  • Minimum 3 years of relevant experience in a similar role or in an EU institution;
  • A university degree in law, EU affairs, policy, human rights or related field or equivalent experience;
  • Demonstrable knowledge of, and interest in, data protection, privacy and copyright, as well as other internet policy issues;
  • Knowledge and understanding of the EU, its institutions and its role in digital rights policies;
  • Experience in leading advocacy efforts and creating networks of influence;
  • Exceptional written and oral communications skills;
  • IT skills; experience using free software and free/open operating systems, WordPress and Nextcloud are an asset;
  • Strong multitasking abilities and ability to manage multiple deadlines;
  • Experience of working with and in small teams;
  • Experience of organising events and/or workshops;
  • Ability to work in English. Knowledge of other European languages, especially French, is an advantage.

What EDRi offers:

  • A permanent, full-time contract;
  • Salary: 3 200 euros gross per month;
  • A dynamic, multicultural and enthusiastic team of experts based in Brussels;
  • The opportunity to foster the protection of fundamental rights in important legislative proposals;
  • A high degree of autonomy and flexibility;
  • An international and diverse network;
  • Networking opportunities.

Starting date: as soon as possible

How to apply:

To apply, please send a maximum one-page cover letter and a maximum two-page CV in English and in .pdf format to applications (at) edri (dot) org with “Senior Policy Advisor” in the subject line by 2 December 2019 (11.59 pm). Candidates are expected to be available for interviews in the week of 11 December.

We are an equal opportunities employer with a strong commitment to transparency and inclusion. We strive to have a diverse and inclusive working environment and ideally, we would like to strive for a gender balance in the policy team. Therefore, we particularly encourage applications from individuals who identify as women. We also encourage individual members of groups at risk of racism or other forms of discrimination to apply for this post.

Please note that only shortlisted candidates will be contacted.

06 Nov 2019

Why tech is not “just a tool”

By Ella Jakubowska

Throughout October 2019, digital rights-watchers welcomed new reports warning about the human rights crises of Artificial Intelligence (AI) and other digital technologies. From Philip Alston’s caution that the UK risks “stumbling zombie-like into a digital welfare dystopia” to David Kaye’s critique of internet companies’ and States’ failure to respect human rights online, civil society is increasingly demanding greater insight into the impact of technology on society. Individuals who do not work on “digital rights” are also becoming progressively more aware of the exponentially increasing power and control of technology giants such as Facebook and Google.

Whilst every citizen is and will continue to be affected (whether positively or negatively) by the rise of technology for everyday services, the risks are becoming more evident for some of the groups that already suffer systematic discrimination. Take this woman who was automatically barred from entering her gym because the system did not recognise that she could be both a doctor and a woman; or this evidence that people of colour get worse medical treatment when decisions are made by algorithms. Not to mention the environmental and human impact of mining precious metals for smartphones (which disproportionately affects the global south) and the incredibly high emissions released by training just one single algorithm. The list, sadly, goes on and on.

The idea that human beings are biased is hardly a surprise. Most of us make “implicit associations”, unconscious assumptions and stereotypes about the things and the people that we see in the world. According to some scientists, there are evolutionary reasons for this: such shortcuts allowed our ancestors to distinguish between friends and foes. These biases, however, become problematic when they lead to unfair or discriminatory treatment – certain groups being surveilled more closely, censored more frequently, or punished more harshly. In the context of human rights in the online environment, this matters because everyone has a right to equal access to privacy, to free speech, and to justice.

States are the actors that are responsible for respecting and protecting their citizens’ human rights. Typically, representatives of a state (such as social workers, judges, police and parole officers) are responsible for making decisions that can impact citizens’ rights: working out the amount of benefits that a person will receive, deciding on the length of a prison sentence, or making a prediction about the likelihood of them re-offending. Increasingly, however, these decisions are starting to be made by algorithms.

Many well-meaning people have fallen into the trap of thinking that tech, with its structured 1s and 0s, removes humans’ messy bias, and allows us to make better, fairer decisions. Yet technology is made by humans, and we unconsciously build our world views into the technology that we produce. This encodes and amplifies underlying biases, whilst outwardly giving the appearance of being “neutral”. Even the data that is used to train algorithms or to make decisions reflects a particular social history. And if that history is racist, or sexist, or ableist? You guessed it: this past discrimination will continue to impact the decisions that are made today.
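
To make that mechanism concrete, here is a minimal, purely illustrative Python sketch (the data and numbers are invented, not drawn from any real system): a decision rule naively “learned” from biased historical outcomes simply reproduces the disparity baked into that history.

    # Illustrative only: invented data showing how a rule "learned" from a
    # biased history reproduces that bias in new decisions.
    from collections import defaultdict

    # Hypothetical historical decisions (group, approved) made by humans.
    history = ([("group_a", True)] * 80 + [("group_a", False)] * 20
               + [("group_b", True)] * 40 + [("group_b", False)] * 60)

    # "Training": estimate the approval rate per group from past decisions.
    counts = defaultdict(lambda: [0, 0])          # group -> [approved, total]
    for group, approved in history:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    approval_rate = {g: a / t for g, (a, t) in counts.items()}

    # "Deployment": approve whenever the group's historical rate exceeds 50%,
    # so yesterday's disparity becomes today's rule.
    def decide(group: str) -> bool:
        return approval_rate[group] > 0.5

    print(approval_rate)        # {'group_a': 0.8, 'group_b': 0.4}
    print(decide("group_a"))    # True
    print(decide("group_b"))    # False, regardless of individual merit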

The decisions made by social workers, police and judges are, of course, frequently difficult, imperfect, and susceptible to human bias too. But they are made by state representatives with an awareness of the social context of their decision, and crucially, an ability to be challenged by the impacted citizen – and overturned if an appropriate authority feels they have judged incorrectly. Humans also have a nifty way of being able to learn from mistakes so that they do not repeat them in the future. Machines making these decisions do not “learn” in the same way as humans: they “learn” to get more precise with their bias, and they lack the self-awareness to know that it leads to discrimination. To make things worse, many algorithms that are used for public services are currently protected under intellectual property laws. This means that citizens do not have a route to challenge decisions that an algorithm has made about them. Recent cases such as Loomis v. Wisconsin, which saw a citizen challenge a prison sentence determined with the help of the COMPAS algorithm used in US courts, have worryingly ruled in favour of upholding the algorithm’s proprietary protections, refusing to reveal how the sentencing decision was made.

Technology is not just a tool, but a social product. It is not intrinsically good or bad, but it is embedded with the views and biases of its makers. It uses flawed data to make assumptions about who you are, which can shape the world that you see. Another example of this is the use of highly personalised adverts in the EU, which may breach our fundamental right to privacy. Technology cannot – at least for now – make fair decisions that require judgement or assessment of human qualities. When it comes to granting or denying access to services and rights, this is even more important. Humans can be aware of their bias, work towards mitigating it, and challenge it when they see it in others. For anyone creating, buying or using algorithms, active consideration of how the tech will impact social justice and human rights must be at the heart of design and use.

Hate speech online: Lessons for protecting free expression (29.10.2019)
https://edri.org/hate-speech-online-lessons-for-protecting-free-expression/

Millions of black people affected by racial bias in health-care algorithms (24.10.2019)
https://www.nature.com/articles/d41586-019-03228-6

Anatomy of an AI System
https://anatomyof.ai/

Profiling the unemployed in Poland: Social and political implications of algorithmic decision making
https://panoptykon.org/sites/default/files/leadimage-biblioteka/panoptykon_profiling_report_final.pdf

Project Implicit
https://implicit.harvard.edu/implicit/takeatest.html

Digital dystopia: how algorithms punish the poor (14.10.2019)
https://www.theguardian.com/technology/2019/oct/14/automating-poverty-algorithms-punish-poor

(Contribution by Ella Jakubowska, EDRi intern)

06 Nov 2019

Danish data retention: Back to normal after major crisis

By IT-Pol

The Danish police and the Ministry of Justice consider access to electronic communications data to be a crucial tool for investigation and prosecution of criminal offences. Legal requirements for blanket data retention, which originally transposed the EU Data Retention Directive, are still in place in Denmark, despite the judgments from the Court of Justice of the European Union (CJEU) in 2014 and 2016 that declared general and indiscriminate data retention illegal under EU law.

In March 2017, in the aftermath of the Tele2 judgment, the Danish Minister of Justice informed the Parliament that it was necessary to amend the Danish data retention law. However, when it comes to illegal data retention, the political willingness to uphold the rule of law seems to be low – every year the revision is postponed by the Danish government with consent from Parliament, citing various formal excuses. Currently, the Danish government is officially hoping that the CJEU will revise the jurisprudence of the Tele2 judgment in the new data retention cases from Belgium, France and the United Kingdom which are expected to be decided in May 2020. This latest postponement, announced on 1 October 2019, barely caught any media attention.

However, data retention has been almost constantly in the news for other reasons since 17 June 2019, when it was revealed to the public that flawed electronic communications data had been used as evidence in up to 10,000 police investigations and criminal trials since 2012. Quickly dubbed the “telecommunications data scandal” by the media, the ramifications of the case have revealed severely inadequate data management practices by the Danish police for almost ten years. This is obviously very concerning for the functioning of the criminal justice system and the right to a fair trial, but also rather surprising in light of the consistent official position of the Danish police that access to telecommunications data is a crucial tool for investigation of criminal offences. The mismatch between the public claims of access to telecommunications data being crucial, and the attention devoted to proper data management, could hardly be any bigger.

According to the initial reports in June 2019, the flaws were caused by an IT system used by the Danish police to convert telecommunications data from different mobile service providers to a common format. Apparently, the IT system sometimes discarded parts of the data received from mobile service providers. During the summer of 2019, a new source of error was identified: in some cases, the data conversion system had modified the geolocation position of mobile towers by up to 200 meters.

Based on the new information about involuntary evidence tampering, the Director of Public Prosecutions decided on 18 August 2019 to impose a temporary two-month ban on the use of telecommunications data as evidence in criminal trials and pre-trial detention cases. Somewhat inconsistently, the police could still use the potentially flawed data for investigative purposes. Since telecommunications data are frequently used in criminal trials in Denmark, for example as evidence that the indicted person was in the vicinity of the crime scene, the two-month moratorium caused a number of criminal trials to be postponed. Furthermore, about 30 persons were released from pre-trial detention, something that generated media attention even outside Denmark.

In late August 2019, the Danish National Police commissioned the consultancy firm Deloitte to conduct an external investigation of its handling of telecommunications data and to provide recommendations for improving the data management practices. The report from Deloitte was published on 3 October 2019, together with statements from the Danish National Police, the Director of Public Prosecutions, and the Ministry of Justice.

The first part of the report identifies the main technical and organisational causes of the flawed data. The IT system used for converting telecommunications data to a common format contained a timer which sometimes submitted the converted data to the police investigator before the conversion job was completed. This explains, at least at a technical level, why parts of the data received from mobile service providers were sometimes discarded. The timer error mainly affected large data sets, such as mobile tower dumps (information about all mobile devices in a certain geographical area and time period) and access to historical location data for individual subscribers.
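
The failure mode described is easy to picture. The following Python sketch is a hypothetical simplification of such a timer flaw (nothing here is taken from the actual police system): output is handed over when a fixed timer fires rather than when the conversion job has finished, so large inputs are silently truncated.

    # Hypothetical simplification of the reported timer flaw, not the real
    # system: converted records are delivered when a timer expires, not when
    # all input records have actually been processed.
    import time

    def convert_record(record: dict) -> dict:
        time.sleep(0.01)                     # pretend each record takes some work
        return {"msisdn": record["msisdn"], "cell_id": record["cell"]}

    def convert_with_timer(records: list, timer_seconds: float) -> list:
        deadline = time.monotonic() + timer_seconds
        converted = []
        for record in records:
            if time.monotonic() > deadline:
                break                        # BUG: remaining records silently dropped
            converted.append(convert_record(record))
        return converted                     # caller cannot tell the output is partial

    raw = [{"msisdn": f"4520{i:06d}", "cell": i % 300} for i in range(500)]
    out = convert_with_timer(raw, timer_seconds=0.5)
    print(f"received {len(raw)} records, delivered {len(out)}")   # far fewer than 500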

The flaws in the geolocation information for mobile towers that triggered the August moratorium were traced to errors in the conversion of geographical coordinates. Mobile service providers in Denmark use two different systems for geographical coordinates, and the police uses a third system internally. During a short period in 2016, the conversion algorithm was applied twice to some mobile tower data, which moved the geolocation positions by a couple of hundred meters.
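
The sketch below is a hypothetical illustration of that second flaw (the coordinate systems and offsets are invented): applying the same conversion twice moves every tower position by the offset a second time.

    # Invented offsets and coordinates; only the failure mode matters here:
    # applying the provider-to-police conversion twice displaces the towers.
    SHIFT_EAST_M, SHIFT_NORTH_M = 90.0, 180.0      # fictitious conversion offset

    def to_police_datum(easting_m: float, northing_m: float) -> tuple:
        return easting_m + SHIFT_EAST_M, northing_m + SHIFT_NORTH_M

    tower = (725_000.0, 6_175_000.0)               # provider coordinates, in metres
    once = to_police_datum(*tower)                 # correct converted position
    twice = to_police_datum(*once)                 # bug: conversion applied again
    print(twice[0] - once[0], twice[1] - once[1])  # tower appears roughly 200 m away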

On the face of it, these errors in the IT system should be relatively straightforward to correct, but the Deloitte report also identifies more fundamental deficiencies in the police practices of handling telecommunications data. In short, the report describes the IT systems and the associated IT infrastructure as complex, outdated, and difficult to maintain. The IT system used for converting telecommunications data was developed internally by the police and maintained by a single employee. Before December 2018, there were no administrative practices for quality control of the data conversion system, not even simple checks to ensure that the entire data set received from mobile service providers had been properly converted.
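
A check of that kind is straightforward to express. Below is a minimal sketch of the sort of completeness check the report says was missing; it is hypothetical and assumes only that the raw and converted data sets can be counted.

    # Minimal completeness check: refuse to hand converted data to
    # investigators unless every record received from the provider is there.
    def validate_conversion(raw_count: int, converted_count: int) -> None:
        if converted_count != raw_count:
            raise ValueError(
                f"conversion incomplete: only {converted_count} of "
                f"{raw_count} records were converted"
            )

    # Example: 500 records received from the provider, only 50 delivered.
    try:
        validate_conversion(raw_count=500, converted_count=50)
    except ValueError as error:
        print(error)       # flags the flaw instead of silently passing data on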

The only viable solution for the Danish police, according to the assessment in the report, is to develop an entirely new infrastructure for handling telecommunications data. Deloitte recommends that the new infrastructure should be based on standard software elements which are accepted globally, rather than internally developed systems which cannot be verified. Concretely, the report suggests using POL-INTEL, a big data policing system supplied by Palantir Technologies, for the new IT infrastructure. In the short term, some investment in the existing infrastructure will be necessary in order to improve the stability of the legacy IT systems and reduce the risk of creating new data flaws. Finally, the report recommends systematic independent quality control and data validation by an external vendor. The Danish National Police has accepted all recommendations in the report.

Deloitte also delivered a short briefing note about the use of telecommunications data in criminal cases. The briefing note, intended for police investigators, prosecutors, defence lawyers and judges, explains the basic use cases of telecommunications data in police investigations, as well as information about how the data is generated in mobile networks. The possible uncertainties and limitations of telecommunications data are also mentioned. For example, it is pointed out that mobile devices do not necessarily connect to the nearest mobile tower, so it cannot simply be assumed that the user of the device is close to the mobile tower with almost “GPS level” accuracy. This addresses a frequent critique against the police and prosecutors for overstating the accuracy of mobile location data – an issue that was covered in depth by the newspaper Information in a series of articles in 2015. Quite interestingly, the briefing note also mentions the possibility of spoofing telephone numbers, so that the incoming telephone call or text message may originate from a different source than the telephone number registered by the mobile service provider under its data retention obligation.

On 16 October 2019, the Director of Public Prosecutions decided not to extend the moratorium on the use of telecommunications data. Along with this decision, the Director issued new and more specific instructions for prosecutors regarding the use of telecommunications data. The Deloitte briefing note should be part of the criminal case (and distributed to the defence lawyer), and police investigators are required to present a quality control report to prosecutors with an assessment of possible sources of error and uncertainty in the interpretation of the telecommunications data used in the case. Documentation of telecommunications data evidence should, to the extent possible, be based on the raw data received from mobile service providers and not the converted data.

For law enforcement, the 16 October decision marks the end of the data retention crisis which erupted in public four months earlier. However, only the most immediate problems at the technical level have really been addressed, and several of the underlying causes of the crisis are still looming under the surface, for example the severely inadequate IT infrastructure used by the Danish police for handling telecommunications data. The Minister of Justice has announced further initiatives, including investment in new IT systems, organisational changes to improve the focus on data management, improved training for police investigators in the proper use and interpretation of telecommunications data, and the creation of a new independent supervisory authority for technical investigation methods used by the police.

Denmark: Our data retention law is illegal, but we keep it for now (08.03.2017)
https://edri.org/denmark-our-data-retention-law-is-illegal-but-we-keep-it-for-now/

Denmark frees 32 inmates over flaws in phone geolocation evidence, The Guardian (12.09.2019)
https://www.theguardian.com/world/2019/sep/12/denmark-frees-32-inmates-over-flawed-geolocation-revelations

Response from the Minister of Justice to the reports on telecommunications data (in Danish only, 03.10.2019)
http://www.justitsministeriet.dk/nyt-og-presse/pressemeddelelser/2019/justitsministerens-reaktion-paa-teledata-redegoerelser

Can cell tower data be trusted as evidence? Blog post by the journalist covering telecommunications data for the newspaper Information (26.09.2015)
https://andreas-rasmussen.dk/2015/09/26/can-cell-tower-data-be-trusted-as-evidence/

(Contribution by Jesper Lund, EDRi member IT-pol, Denmark)

06 Nov 2019

Portuguese ISPs ignore telecom regulator’s recommendations

By D3 Defesa dos Direitos Digitais

In 2018, the Portuguese telecom regulator ANACOM told the three major Portuguese mobile Internet Service Providers (ISPs) to change offers that were in breach of EU net neutrality rules. Among other things, the regulator recommended that ISPs publish their terms and conditions, and increase the data volume of their mobile data packs in order to bring it closer to their zero-rating offers. In Portugal, average mobile data volumes are small, yet prices are among the highest in Europe. ANACOM’s net neutrality report, published in June 2019, reveals how the ISPs reacted to the regulator’s intervention.

While operators have complied with ANACOM’s decision on differential treatment of traffic after the general data ceiling has been exhausted, that was as far as they went. Regarding the increase of data volume, all three major operators simply ignored ANACOM’s demand. None of them changed their offers. One of the operators claimed, instead, that “the current ceiling is adjusted to the demand”.

ANACOM had also asked the ISPs to publish the terms and conditions under which other companies and their applications can be included in their zero-rating packages. The result: all operators ignored this recommendation, too.

Surprisingly, the regulator’s reaction was lukewarm, at best. Instead of strongly criticising the ISPs for not complying with its recommendations, it stated that it “will continue to monitor all matters concerning these recommendations”, and that this will be followed up with “further analysis in the context of net neutrality […]”.

Portuguese EDRi observer D3 Defesa dos Direitos Digitais regrets the lack of will and courage on the part of ANACOM to put an end to the harmful practices of ISPs. Zero-rating harms consumers and free competition by tilting the playing field in favour of a few selected, dominant applications, and it constitutes a threat to a free and neutral internet. By not acting against price discrimination practices between applications and restricting its action to technical discrimination of traffic, ANACOM shows no intention to act on the underlying problem of zero-rating offers.

The result is that in Portugal, mobile data volumes are on average small, and the prices are among the highest in Europe. Users suffer from an over-concentrated market – three major ISPs share 98% of the market. In this setting, the leading companies can afford to ignore the regulator’s public recommendations without practical consequences. The legislator has not introduced the fines for net neutrality infringements that have been mandatory under EU law since 2015.

This article is an adaptation of an article published at:
https://en.epicenter.works/content/zero-rating-in-portugal-permissive-regulator-allows-isp-to-get-away-with-offering-some-of

D3 Defesa dos Direitos Digitais
https://www.direitosdigitais.pt/

epicenter.works
https://en.epicenter.works/

Portuguese ISPs given 40 days to comply with EU net neutrality rules (07.03.2018)
https://edri.org/portuguese-isps-given-40-days-to-comply-with-eu-net-neutrality-rules/

Civil society urges Portuguese telecom regulator to uphold net neutrality (23.04.2018)
https://edri.org/civil-society-urges-portuguese-telecom-regulator-uphold-net-neutrality/

(Contribution by Eduardo Santos, EDRi observer D3 Defesa dos Direitos Digitais, Portugal)

06 Nov 2019

Twitter banning political ads – the tip of the iceberg

By Chloé Berthélémy

Twitter seems to have learnt the lessons of the 2016 US elections. After the revelation of the Cambridge Analytica scandal, the link between targeted political advertising on social media and the voting behaviour of specific groups of people has been explored and explained again and again. We now understand how social media platforms like Facebook and Twitter play a decisive role in our elections and other democratic processes, and how misleading information, which spreads faster and further than true stories on those platforms, can manipulate voters to a remarkable degree.

When Facebook CEO Mark Zuckerberg was grilled by Representative Alexandria Ocasio-Cortez in a hearing of the United States House Committee on Financial Services on 23 October, he admitted that if Republicans paid to spread a lie on Facebook’s services, it would probably not be prohibited. Political advertisements are not subjected to any fact-checking review which could theoretically lead to the refusal or the blocking of this promoted content. According to Zuckerberg’s vision, if a politician lies, an open public debate helps expose these lies and the electorate holds the politician accountable by rejecting her or his ideas. The principle of free speech rests on this very idea that all statements should be debated, and the bad ones naturally set aside. The only problem is that neither Facebook nor Twitter provides an infrastructure for such an open public debate.

These companies do not display content in a neutral and universal way to everybody. What users see reflects what their personal data has revealed about their lives, preferences and habits. Information is broadcast to each user in a selective, narrowly defined manner, in line with what the algorithms have concluded about that person’s past online activity. Hence, so-called “filter bubbles”, combined with the human inclination for confirmation bias, trap individuals in restricted information environments. These prevent people from forming opinions based on diversified sources of information – a core principle of open public debate.

Some parties in this discussion would like to officially acknowledge the critical infrastructure status that dominant social media have in our societies, treating their platforms as the new arena in which the public sphere takes place. This would imply applying to social media platforms the existing laws on TV channels and radio broadcasters that require them to carry certain types of content and to exclude others. Considering the amount of content posted every minute on each of those platforms, recourse to automatic filtering measures would be inevitable. This would also cement their power over people’s speech and thoughts.

Banning political ads is a positive step towards reducing the harm caused by the amplification of false information. However, this measure is still missing the point: the most crucial problem is micro-targeting. Banning political ads is unlikely to stop micro-targeting, since that’s the business model of all the main social media companies, including Twitter.

The first step of micro-targeting is profiling. Profiling consists of collecting as much data as possible on each user to build behavioural tracking profiles – it has been proven that Facebook has expanded this collection even to those who aren’t using its platform. Profiling is enabled by keeping the user trapped on the platform and eliciting as much attention and “engagement” as possible. The “attention economy” relies on content that keeps us scrolling, commenting and clicking. Which content does the job is predicted based on our tracking profiles. Usually it is offensive, shocking and polarising content. This is why political content is one of the most effective at maximising profits. No need for it to be paid for.
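
As a rough illustration of that logic (a hypothetical sketch, not any platform’s actual ranking code), a feed ordered purely by predicted engagement will systematically push the most provocative items to the top:

    # Hypothetical engagement-maximising ranking; the scores and the toy
    # model are invented, not any real platform's algorithm.
    posts = [
        {"id": 1, "topic": "local news",       "polarising_score": 0.1},
        {"id": 2, "topic": "cat pictures",     "polarising_score": 0.2},
        {"id": 3, "topic": "outrage politics", "polarising_score": 0.9},
        {"id": 4, "topic": "conspiracy claim", "polarising_score": 0.8},
    ]

    def predicted_engagement(post: dict, profile: dict) -> float:
        # Toy model: engagement grows with how polarising a post is,
        # weighted by how reactive the tracking profile says the user is.
        return post["polarising_score"] * profile["reactivity"]

    profile = {"reactivity": 1.0}     # built from tracked behaviour
    feed = sorted(posts, key=lambda p: predicted_engagement(p, profile), reverse=True)
    print([p["topic"] for p in feed]) # divisive content floats to the top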

Twitter CEO Jack Dorsey is right in affirming that this is not a freedom of expression issue, but rather a question of reach, to which no fundamental right attaches. To the contrary, the rights to data protection and to privacy are human rights, and it is high time for the European Union to uphold them against harmful profiling practices. A step towards that would be to adopt a strong ePrivacy Regulation. This piece of legislation would reinforce the safeguards the General Data Protection Regulation (GDPR) introduced. It would ensure that privacy by design and by default are guaranteed. Finally, it would tackle the pervasive model of online tracking.

Right a wrong: ePrivacy now! (9.10.2019)
https://edri.org/right-a-wrong-eprivacy-now/

Open letter to EU Member States: Deliver ePrivacy now! (10.10.2019)
https://edri.org/tag/eprivacy-regulation/

Civil society calls Council to adopt ePrivacy now (5.12.2018)
https://edri.org/civil-society-calls-council-to-adopt-eprivacy-now/

EU elections – protecting our data to protect us from manipulation (08.05.2019)
https://edri.org/eu-elections-protecting-our-data-to-protect-us-from-manipulation/

(Contribution by Chloé Berthélémy, EDRi)

06 Nov 2019

Greece: The new data protection law raises concerns

By Homo Digitalis

On 29 August 2019, the much awaited new Greek data protection law came into force. This law (4624/2019) implements the provisions of both the EU Law Enforcement Directive (LED, 2016/680) and the General Data Protection Regulation (GDPR) at national level. However, since the first days after the law was adopted, a lot of criticism has been voiced concerning the lack of conformity of its provisions with the GDPR.

The Greek data protection law was adopted following the European Commission’s decision of July 2019 to refer Greece to the Court of Justice of the European Union (CJEU) for not transposing the LED on time. Thus, the national authorities acted fast in order to adopt a new data protection law. Unfortunately, the process was rushed. As a result, the new data protection law suffers from important shortcomings and includes Articles that challenge the provisions of the LED or even the GDPR.

In September 2019, Greek EDRi observer Homo Digitalis, together with the Greek consumer protection organisation EKPIZO, sent a joint request to the Hellenic data protection authority (DPA) asking it to issue an Opinion on the conformity of the Greek law with the provisions of the LED and the GDPR. The DPA issued a press statement in early October 2019 announcing that it will come up with an Opinion in due time. Moreover, on 24 October 2019, Homo Digitalis filed a new complaint with the European Commission regarding the provisions of the Greek data protection law that challenge the EU data protection regime.

Moreover, in order to acquire a thorough view of the Greek law, Homo Digitalis reached out to one of the most prominent privacy and data protection law experts in Greece, Professor Lilian Mitrou, who kindly shared her thoughts on the positive and negative aspects of the new data protection law.

Professor Mitrou states that, on the positive side, the Greek legislator has introduced further limitations to the processing of sensitive data (genetic data, biometric data or data concerning health). Thus, according to Article 23 of the new Greek law, the processing of genetic data for health and life insurance purposes is expressly prohibited. “In this respect the Greek law, by stipulating prohibition on the use of genetic findings in the sphere of insurance, precludes the risk of results of genetic diagnosis being used to discriminate against people,” she says.

However, a strong point of criticism relates to the provisions concerning purpose alienation. The Greek law introduces very wide and vague exceptions to the purpose limitation principle, which prohibits the further use of data for incompatible purposes. “For example, private entities are allowed to process personal data for preventing threats against national or public security upon request of a public entity. Serious concerns are raised also with regard to the limitations of the data subjects’ rights,” Professor Mitrou points out.

She recalls that the Greek legislator “has made extensive use of the limitations permitted by Article 23 of the GDPR to restrict the right to information, the right to access and the right to rectification and erasure”. However, these restrictions have been adopted without fully complying with the safeguards provided in Article 23, para 2 GDPR. Moreover, the Greek law introduces provisions that allow the data controller not to erase data upon request of the data subject, in case the controller has reason to believe that erasure would adversely affect legitimate interests of the data subject. Thus, the data controller is allowed by the Greek legislator to substitute its own judgement for the will of the data subject.

“The Greek law has not respected the GDPR as standard borderline and has (mis)used ‘opening clauses’ and Member State discretion not to enhance but to reduce the level of data protection,” Professor Mitrou concludes.

Homo Digitalis
https://www.homodigitalis.gr/

Professor Lilian Mitrou
https://www.icsd.aegean.gr/group/members-data.php?group=L1&member=47&fbclid=IwAR3LWksLRO0Yp1JCNWaGp-UODEeyALxtDHYOUo7Tg7kQ_CtGXfS2l8Z-cxw

The data protection law 4624/2019 (only in Greek 29.08.2019)
https://www.kodiko.gr/nomologia/document_navigation/552084/nomos-4624-2019

Official Request to the Hellenic Data Protection Authority for the issuance of legal opinion on Law 4624/2019 (20.09.2019)
https://www.homodigitalis.gr/en/posts/4217

Homo Digitalis on a seminar discussion regarding Law 4624/2019 (24.09.2019)
https://www.homodigitalis.gr/en/posts/4232

Homo Digitalis’ complaint to the European Commission against the new data protection law 4624/2019 (24.10.2019)
https://www.homodigitalis.gr/source_content/uploads/2019/11/Complaint-Form-for-breach-of-EU-law_24October2019.pdf

(Contribution by Eleftherios Chelioudakis, EDRi observer Homo Digitalis, Greece)

29 Oct 2019

Hate speech online: Lessons for protecting free expression

By Ella Jakubowska

On 21 October, David Kaye – UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression – released the preliminary findings of his sixth report on information and communication technology. They include tangible suggestions to internet companies and states whose current efforts to control hate speech online are failing to comply with the fundamental principles of human rights. The EU Commission should consider Kaye’s recommendations when creating new rules for the internet and – most importantly – when drafting the Digital Services Act (DSA).

The “Report of the Special Rapporteur to the General Assembly on online hate speech” (docx) draws on international legal instruments on civil, political and non-discrimination rights to show how human rights law already provides a robust framework for tackling hate speech online. The report offers an incisive critique of platform business models which, supported by States, profit from the spread of “hateful content” whilst violating free expression by wantonly deleting legal content. Instead, Kaye offers a blueprint for tackling hate speech in a way which empowers citizens, protects online freedom, and puts the burden of proof on States, not users. Whilst the report outlines a general approach, the European Commission should incorporate Kaye’s advice when developing the proposed Digital Services Act (DSA) and other related legislative and non-legislative initiatives, to ensure that the regulation of hate speech does not inadvertently violate citizens’ digital rights.

Harmful content removal: under international law, there is a better way

Sexism, racism and other forms of hate speech (which Kaye defines as “incitement to discrimination, hostility or violence”) in the online environment are quite rightly areas of attention for global digital policy and law makers. But the report offers a much-needed reminder that restricting freedom of expression online through deleting content is not just an ineffective solution, but in fact threatens a multitude of rights and freedoms that are vital for the functioning of democratic societies. Freedom of expression is, as Kaye states, “fundamental to the enjoyment of all human rights”. If curtailed, it can open the door for repressive States to systematically suppress their citizens. Kaye gives the example of blasphemy laws: profanity, whilst offensive, must be protected – otherwise it can be used to punish and silence citizens that do not conform to a particular religion. And others such as journalist Glenn Greenwald have already pointed out in the past how “hate speech” legislation is used in the EU to suppress left-wing viewpoints.

Fundamental rules for restricting freedom of expression online

The report is clear that restrictions of online speech “must be exceptional, subject to narrow conditions and strict oversight”, with the burden of proof “on the authority restricting speech to justify the restriction”. Any restriction is thus subject to three criteria under human rights law:

Firstly, under the legality criterion, Kaye uses human rights law to show that any hate speech restricted online (as offline) must be genuinely unlawful, not just offensive or harmful. It must be regulated in a way that does not give “excessive discretion” to governments or private actors, and that gives impacted individuals independent routes of appeal. Conversely, the current situation gives de facto regulatory power to internet companies by allowing (and even pressuring) them to act as the arbiters of what does and does not constitute free speech. Coupled with error-prone automated filters and short takedown periods incentivising over-removal of content, this is a free speech crisis in motion.

Secondly, on the question of legitimacy, the report outlines the requirement for laws and policies on online hate speech to be treated in the same way as those governing any other speech. This means ensuring that freedom of expression is restricted only for legitimate interests, and not curtailed for “illegitimate purposes” like suppressing criticism of States. Potential illegal suppression is enabled by overly broad definitions of hate speech, which can act as a catch-all for content that States find offensive, despite it being legal. A lack of strict definitions in the counter-terrorism policy field has already had a strong impact on freedom of expression in Spain, for example. “National security” has been shown to be abusively invoked to justify measures interfering with human rights, and used as a pretext to adopt vague and arbitrary limitations.

Lastly, necessity and proportionality are violated by current moderation practices, including “nearly immediate takedown” requirements and automatic filters which clumsily censor legal content, turning it into collateral damage in a war against hate speech. This violates the rights to due process and redress, and unnecessarily puts the burden of justifying content on users. Worryingly, Kaye continues that “such filters disproportionately harm historically under-represented communities.”

A rational approach to tackling hate speech online

The report offers a wide range of solutions for tackling hate speech whilst avoiding content deletion or internet shutdowns. Guided by human rights documents including the so-called “Ruggie Principles” (the 2011 UN Guiding Principles on Business and Human Rights), the report emphasises that internet companies need to exercise a greater degree of human rights due diligence. This includes transparent review processes, human rights impact assessments, clear routes of appeal and human, rather than algorithmic, decision-making. Crucially, Kaye calls on internet platforms to “de-monetiz[e] harmful content” in order to counteract the business models that profit from viral, provocative, harmful content. He stresses that the biggest internet companies must bear the cost of developing solutions, and share them with smaller companies to ensure that fair competition is protected.

The report is also clear that States must take more responsibility, working in collaboration with the public to put in place clear laws and standards for internet companies, educational measures, and remedies (both judicial and non-judicial) in line with international human rights law. In particular, they must take care when developing intermediary liability laws to ensure that internet companies are not forced to delete legal content.

The report gives powerful lessons for the future DSA and other related policy initiatives. In the protection of fundamental human rights, we must limit content deletion (especially automated) and avoid measures that make internet companies de facto regulators: they are not – and nor would we want them to be – human rights decision-makers. We must take the burden of proof away from citizens, and create transparent routes for redress. Finally, we must remember that the human rights rules of the offline world apply just as strongly online.

Report of the Special Rapporteur on the promotion and protection of the freedom of opinion and expression, A/74/486 (Advanced unedited report)
https://www.ohchr.org/EN/Issues/FreedomOpinion/Pages/Annual.aspx

E-Commerce review: Opening Pandora’s box? (20.06.2019)
https://edri.org/e-commerce-review-1-pandoras-box/

In Europe, Hate Speech Laws are Often Used to Suppress and Punish Left-Wing Viewpoints (29.08.2017)
https://theintercept.com/2017/08/29/in-europe-hate-speech-laws-are-often-used-to-suppress-and-punish-left-wing-viewpoints/

EU copyright dialogues: The next battleground to prevent upload filters (18.10.2019)
https://edri.org/eu-copyright-dialogues-the-next-battleground-to-prevent-upload-filters/

Spain: Tweet… if you dare: How counter-terrorism laws restrict freedom of expression in Spain (13.03.2018)
https://www.amnesty.org/en/documents/eur41/7924/2018/en/

CCBE Recommendations on the protection of fundamental rights in the context of ‘national security’ 2019
https://www.ccbe.eu/fileadmin/speciality_distribution/public/documents/SURVEILLANCE/SVL_Guides_recommendations/EN_SVL_20190329_CCBE-Recommendations-on-the-protection-of-fundamental-rights-in-the-context-of-national-security.pdf

(Contribution by Ella Jakubowska, EDRi intern)
