European Digital Rights (EDRi) is an international not-for-profit association of 42 digital human rights organisations from across Europe and beyond. We defend and promote rights and freedoms in the digital environment, such as the right to privacy, personal data protection, freedom of expression, and access to information.
EDRi is looking for a talented and dedicated Senior Policy Advisor to join its team in Brussels. This is a unique opportunity to be part of a growing and well-respected NGO that is making a real difference in the defence and promotion of online rights and freedoms in Europe and beyond. The deadline to apply is 2 December 2019.
As a Senior Policy Advisor, your main tasks will be to:
Monitor, analyse and report about human rights implications of EU digital policy developments;
Advocate for the protection of digital rights, particularly but not exclusively in the areas of artificial intelligence, data protection, privacy, net neutrality and copyright;
Provide policy-makers with expert, timely and accurate input;
Draft policy documents, such as briefings, position papers, amendments, advocacy one-pagers, letters, blogposts and EDRi-gram articles;
Provide EDRi members with information about relevant EU legislative processes, coordinate working groups, help develop campaign messages, and inform the public about relevant EU legislative processes and EDRi’s activities;
Represent EDRi at European and global events;
Organise and participate in expert meetings;
Maintain good relationships with policy-makers, stakeholders and the press;
Support and work closely with other staff members including policy, communications and campaigns colleagues and report to the Head of Policy and to the Executive Director;
Contribute to the policy strategy of the organisation;
Desired qualifications and experience:
Minimum 3 years of relevant experience in a similar role or EU institution;
A university degree in law, EU affairs, policy, human rights or related field or equivalent experience;
Demonstrable knowledge of, and interest in, data protection, privacy and copyright, as well as other internet policy issues;
Knowledge and understanding of the EU, its institutions and its role in digital rights policies;
Experience in leading advocacy efforts and creating networks of influence;
Exceptional written and oral communications skills;
IT skills; experience with free software, free/open operating systems, WordPress and Nextcloud is an asset;
Strong multitasking abilities and ability to manage multiple deadlines;
Experience of working with and in small teams;
Experience of organising events and/or workshops;
Ability to work in English; knowledge of other European languages, especially French, is an advantage.
What EDRi offers:
A permanent, full-time contract;
Salary: 3 200 euros gross per month;
A dynamic, multicultural and enthusiastic team of experts based in Brussels;
The opportunity to foster the protection of fundamental rights in important legislative proposals;
A high degree of autonomy and flexibility;
An international and diverse network;
Starting date: as soon as possible
How to apply:
To apply, please send a maximum one-page cover letter and a maximum two-page CV in English and in .pdf format to applications (at) edri (dot) org with “Senior Policy Advisor” in the subject line by 2 December 2019 (11.59 pm). Candidates will be expected to be available for interviews in the week of 11 December.
We are an equal opportunities employer with a strong commitment to transparency and inclusion. We strive for a diverse and inclusive working environment and aim for gender balance in the policy team. Therefore, we particularly encourage applications from individuals who identify as women. We also encourage members of groups at risk of racism or other forms of discrimination to apply for this post.
Please note that only shortlisted candidates will be contacted.
Whilst every citizen is and will continue to be affected (whether positively or negatively) by the rise of technology for everyday services, the risks are becoming more evident for some of the groups that already suffer systematic discrimination. Take this woman who was automatically barred from entering her gym because the system did not recognise that she could be both a doctor and a woman; or this evidence that people of colour get worse medical treatment when decisions are made by algorithms. Not to mention the environmental and human impact of mining precious metals for smartphones (which disproportionately affects the Global South) and the incredibly high emissions released by training a single algorithm. The list, sadly, goes on and on.
The idea that human beings are biased is hardly a surprise. Most of us make “implicit associations”, unconscious assumptions and stereotypes about the things and the people that we see in the world. According to some scientists, there are evolutionary reasons for this, in order to allow our ancestors to distinguish between friends and foes. These biases, however, become problematic when they lead to unfair or discriminatory treatment – certain groups being surveilled more closely, censored more frequently, or punished more harshly. In the context of human rights in the online environment, this matters because everyone has a right to equal access to privacy, to free speech, and to justice.
States are the actors that are responsible for respecting and protecting their citizens’ human rights. Typically, representatives of a state (such as social workers, judges, police and parole officers) are responsible for making decisions that can impact citizens’ rights: working out the amount of benefits that a person will receive, deciding on the length of a prison sentence, or making a prediction about the likelihood of them re-offending. Increasingly, however, these decisions are starting to be made by algorithms.
Many well-meaning people have fallen into the trap of thinking that tech, with its structured 1s and 0s, removes humans’ messy bias, and allows us to make better, fairer decisions. Yet technology is made by humans, and we unconsciously build our world views into the technology that we produce. This encodes and amplifies underlying biases, whilst outwardly giving the appearance of being “neutral”. Even the data that is used to train algorithms or to make decisions reflects a particular social history. And if that history is racist, or sexist, or ableist? You guessed it: this past discrimination will continue to impact the decisions that are made today.
The decisions made by social workers, police and judges are, of course, frequently difficult, imperfect, and susceptible to human bias too. But they are made by state representatives with an awareness of the social context of their decision, and crucially, an ability to be challenged by the impacted citizen – and overturned if an appropriate authority feels they have judged incorrectly. Humans also have a nifty way of being able to learn from mistakes so that they do not repeat them in the future. Machines making these decisions do not “learn” in the same way as humans: they “learn” to get more precise with their bias, and they lack the self-awareness to know that it leads to discrimination. To make things worse, many algorithms that are used for public services are currently protected under intellectual property laws. This means that citizens do not have a route to challenge decisions that an algorithm has made about them. In recent cases such as Loomis v. Wisconsin, in which a citizen challenged a prison sentence determined by the US COMPAS algorithm, courts have worryingly ruled in favour of upholding the algorithm’s proprietary protections, refusing to reveal how the sentencing decision was made.
Technology is not just a tool, but a social product. It is not intrinsically good or bad, but it is embedded with the views and biases of its makers. It uses flawed data to make assumptions about who you are, which can impact the world that you see. Another example of this is the use of highly personalised adverts in the EU, which may breach our fundamental right to privacy. Technology cannot – at least for now – make fair decisions that require judgement or assessment of human qualities. When it comes to granting or denying access to services and rights, this is even more important. Humans can be aware of their bias, work towards mitigating it, and challenge it when they see it in others. For anyone creating, buying or using algorithms, active consideration of how the tech will impact social justice and human rights must be at the heart of design and use.
The Danish police and the Ministry of Justice consider access to electronic communications data to be a crucial tool for investigation and prosecution of criminal offences. Legal requirements for blanket data retention, which originally transposed the EU Data Retention Directive, are still in place in Denmark, despite the judgments from the Court of Justice of the European Union (CJEU) in 2014 and 2016 that declared general and indiscriminate data retention illegal under EU law.
In March 2017, in the aftermath of the Tele2 judgment, the Danish Minister of Justice informed the Parliament that it was necessary to amend the Danish data retention law. However, when it comes to illegal data retention, the political willingness to uphold the rule of law seems to be low – every year the revision is postponed by the Danish government with consent from Parliament, citing various formal excuses. Currently, the Danish government is officially hoping that the CJEU will revise the jurisprudence of the Tele2 judgment in the new data retention cases from Belgium, France and the United Kingdom which are expected to be decided in May 2020. This latest postponement, announced on 1 October 2019, barely caught any media attention.
However, data retention has been almost constantly in the news for other reasons since 17 June 2019 when it was revealed to the public that flawed electronic communications data had been used as evidence in up to 10,000 police investigations and criminal trials since 2012. Quickly dubbed the “telecommunications data scandal” by the media, the ramifications of the case have revealed severely inadequate data management practices by the Danish police for almost ten years. This is obviously very concerning for the functioning of the criminal justice system and the right to a fair trial, but also rather surprising in light of the consistent official position of the Danish police that access to telecommunications data is a crucial tool for investigation of criminal offences. The mismatch between the public claims of access to telecommunications data being crucial, and the attention devoted to proper data management, could hardly be any bigger.
According to the initial reports in June 2019, the flawed data was caused by an IT system used by the Danish police to convert telecommunications data from different mobile service providers to a common format. Apparently, the IT system sometimes discarded parts of the data received from mobile service providers. During the summer of 2019, a new source of error was identified: in some cases, the data conversion system had modified the geolocation position of mobile towers by up to 200 meters.
Based on the new information about involuntary evidence tampering, the Director of Public Prosecutions decided on 18 August 2019 to impose a temporary two-month ban on the use of telecommunications data as evidence in criminal trials and pre-trial detention cases. Somewhat inconsistently, the police could still use the potentially flawed data for investigative purposes. Since telecommunications data is frequently used in criminal trials in Denmark, for example as evidence that the indicted person was in the vicinity of the crime scene, the two-month moratorium caused a number of criminal trials to be postponed. Furthermore, about 30 persons were released from pre-trial detention, which generated media attention even outside Denmark.
In late August 2019, the Danish National Police commissioned the consultancy firm Deloitte to conduct an external investigation of its handling of telecommunications data and to provide recommendations for improving the data management practices. The report from Deloitte was published on 3 October 2019, together with statements from the Danish National Police, the Director of Public Prosecutions, and the Ministry of Justice.
The first part of the report identifies the main technical and organisational causes for the flawed data. The IT system used for converting telecommunications data to a common format contained a timer which sometimes submitted the converted data to the police investigator before the conversion job was completed. This explains, at least at a technical level, why parts of the data received from mobile service providers were sometimes discarded. The timer error mainly affected large data sets, such as mobile tower dumps (information about all mobile devices in a certain geographical area and time period) and access to historical location data for individual subscribers.
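The mechanics of such a timer flaw can be sketched in a few lines. This is a hypothetical reconstruction, not the actual police system: the function names, the timeout, and the per-record workload are all illustrative, but the failure mode matches the report's description – a job that hands over whatever has been processed when the timer fires, so large data sets lose their tail.

```python
# Hypothetical sketch of the reported timer flaw: a conversion job that
# submits partial output when a timer fires instead of waiting for the
# whole data set. All names and timings are illustrative assumptions.

import time

def convert_with_timer(records, per_record_seconds, timeout_seconds):
    """Convert records, but stop and submit partial output on timeout."""
    deadline = time.monotonic() + timeout_seconds
    converted = []
    for record in records:
        if time.monotonic() > deadline:   # timer fires: job submitted early
            break
        time.sleep(per_record_seconds)    # stand-in for real conversion work
        converted.append(record.upper())
    return converted

records = [f"row{i}" for i in range(50)]
result = convert_with_timer(records, per_record_seconds=0.01,
                            timeout_seconds=0.1)
# A large data set blows past the timer, so part of the data is
# silently discarded – exactly the behaviour the report describes:
print(f"{len(result)} of {len(records)} records converted")
```

Small requests finish before the deadline and look correct, which is one plausible reason the flaw could go unnoticed for years: only large jobs such as tower dumps were truncated.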
The flaws in the geolocation information for mobile towers that triggered the August moratorium were traced to errors in the conversion of geographical coordinates. Mobile service providers in Denmark use two different systems for geographical coordinates, and the police uses a third system internally. During a short period in 2016, the conversion algorithm was applied twice to some mobile tower data, which moved the geolocation positions by a couple of hundred meters.
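A double-applied coordinate conversion of this kind is easy to reproduce. The sketch below is purely illustrative – the actual coordinate reference systems and datum shifts involved are not public in this detail, so the shift values are invented – but it shows how applying the same transformation twice displaces a position by a fixed offset of the order reported.

```python
# Hypothetical sketch: applying a datum-shift conversion twice.
# The real coordinate systems and offsets used by Danish providers and
# police are assumptions here; the shift values are illustrative only.

DATUM_SHIFT_M = (120.0, 95.0)  # illustrative east/north shift in metres

def to_police_system(east_m: float, north_m: float) -> tuple[float, float]:
    """Convert provider coordinates to the police's internal system."""
    return east_m + DATUM_SHIFT_M[0], north_m + DATUM_SHIFT_M[1]

tower = (725_000.0, 6_175_000.0)                     # provider coordinates
converted_once = to_police_system(*tower)            # correct position
converted_twice = to_police_system(*converted_once)  # the 2016 bug

error_east = converted_twice[0] - converted_once[0]
error_north = converted_twice[1] - converted_once[1]
error_m = (error_east ** 2 + error_north ** 2) ** 0.5
print(f"Position error from double conversion: {error_m:.0f} m")
```

Because the offset is constant, every affected tower moves by the same distance and direction, which makes the flaw invisible unless the converted coordinates are checked against the raw provider data.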
On the face of it, these errors in the IT system should be relatively straightforward to correct, but the Deloitte report also identifies more fundamental deficiencies in the police practices of handling telecommunications data. In short, the report describes the IT systems and the associated IT infrastructure as complex, outdated, and difficult to maintain. The IT system used for converting telecommunications data was developed internally by the police and maintained by a single employee. Before December 2018, there were no administrative practices for quality control of the data conversion system, not even simple checks to ensure that the entire data set received from mobile service providers had been properly converted.
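The kind of quality control the report found missing can be very simple. The sketch below is an illustration of the principle, not the actual police pipeline – the record fields and function names are invented – but it shows the minimal check: refuse to hand over converted data unless every record received from the provider is accounted for.

```python
# Minimal sketch of the quality control the report found missing:
# verify that no record received from the provider is silently dropped
# during conversion. Record formats and field names are illustrative.

def convert_record(raw: dict) -> dict:
    """Stand-in for the conversion step to the police's common format."""
    return {"msisdn": raw["number"], "cell_id": raw["cell"], "ts": raw["time"]}

def convert_with_check(raw_records: list[dict]) -> list[dict]:
    converted = [convert_record(r) for r in raw_records]
    # Completeness check: output count must match input count.
    if len(converted) != len(raw_records):
        raise RuntimeError(
            f"Conversion incomplete: {len(raw_records)} records received, "
            f"only {len(converted)} converted"
        )
    return converted

raw = [
    {"number": "4512345678", "cell": "DK-001", "time": "2019-06-17T10:00:00"},
    {"number": "4587654321", "cell": "DK-002", "time": "2019-06-17T10:01:00"},
]
print(len(convert_with_check(raw)), "records converted and verified")
```

Even a record count comparison of this sort, run on every job, would have flagged the truncated tower dumps years earlier.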
The only viable solution for the Danish police, according to the assessment in the report, is to develop an entirely new infrastructure for handling telecommunications data. Deloitte recommends that the new infrastructure should be based on standard software elements which are accepted globally, rather than internally developed systems which cannot be verified. Concretely, the report suggests using POL-INTEL, a big data policing system supplied by Palantir Technologies, for the new IT infrastructure. In the short term, some investment in the existing infrastructure will be necessary in order to improve the stability of the legacy IT systems and reduce the risk of creating new data flaws. Finally, the report recommends systematic independent quality control and data validation by an external vendor. The Danish National Police has accepted all recommendations in the report.
Deloitte also delivered a short briefing note about the use of telecommunications data in criminal cases. The briefing note, intended for police investigators, prosecutors, defence lawyers and judges, explains the basic use cases of telecommunications data in police investigations, as well as information about how the data is generated in mobile networks. The possible uncertainties and limitations of telecommunications data are also mentioned. For example, it is pointed out that mobile devices do not necessarily connect to the nearest mobile tower, so it cannot simply be assumed that the user of the device is close to the mobile tower with almost “GPS level” accuracy. This addresses a frequent criticism of the police and prosecutors for overstating the accuracy of mobile location data – an issue that was covered in depth by the newspaper Information in a series of articles in 2015. Quite interestingly, the briefing note also mentions the possibility of spoofing telephone numbers, so that an incoming telephone call or text message may originate from a different source than the telephone number registered by the mobile service provider under its data retention obligation.
On 16 October 2019, the Director of Public Prosecutions decided not to extend the moratorium on the use of telecommunications data. Along with this decision, the Director issued new and more specific instructions for prosecutors regarding the use of telecommunications data. The Deloitte briefing note should be part of the criminal case (and distributed to the defence lawyer), and police investigators are required to present a quality control report to prosecutors with an assessment of possible sources of error and uncertainty in the interpretation of the telecommunications data used in the case. Documentation of telecommunications data evidence should, to the extent possible, be based on the raw data received from mobile service providers and not the converted data.
For law enforcement, the 16 October decision marks the end of the data retention crisis which erupted in public four months earlier. However, only the most imminent problems at the technical level have been addressed, and several of the underlying causes of the crisis are still lurking beneath the surface – for example, the severely inadequate IT infrastructure used by the Danish police for handling telecommunications data. The Minister of Justice has announced further initiatives, including investment in new IT systems, organisational changes to improve the focus on data management, improved training for police investigators in the proper use and interpretation of telecommunications data, and the creation of a new independent supervisory authority for technical investigation methods used by the police.
In 2018, the Portuguese telecom regulator ANACOM told the three major Portuguese mobile Internet Service Providers (ISPs) to change offers that were in breach of EU net neutrality rules. Among other things, the regulator recommended that ISPs publish their terms and conditions, and increase the data volume of their mobile data packs in order to bring it closer to their zero-rating offer. In Portugal, average mobile data volumes are small, yet among the most expensive in Europe. ANACOM’s net neutrality report that was published in June 2019 reveals how the ISPs reacted to the regulator’s intervention.
While operators have complied with ANACOM’s decision on differential treatment of traffic after the general data ceiling has been exhausted, that was as far as they went. Regarding the increase of data volume, all three major operators simply ignored ANACOM’s demand. None of them changed their offers. One of the operators claimed, instead, that “the current ceiling is adjusted to the demand”.
ANACOM had also asked the ISPs to publish the terms and conditions under which other companies and their applications can be included in their zero-rating packages. The result: all operators ignored this recommendation, too.
Surprisingly, the regulator’s reaction was lukewarm at best. Instead of strongly criticising the ISPs for not complying with its recommendations, it stated that it “will continue to monitor all matters concerning these recommendations”, and that this will be followed up with “further analysis in the context of net neutrality […]”.
Portuguese EDRi observer D3 Defesa dos Direitos Digitais regrets the lack of will and courage on the part of ANACOM to put an end to the harmful practices of ISPs. Zero-rating harms consumers and free competition by tilting the playing field in favour of a few selected, dominant applications, and it constitutes a threat to a free and neutral internet. By not acting against price discrimination practices between applications and restricting its action to technical discrimination of traffic, ANACOM shows no intention to act on the underlying problem of zero-rating offers.
The result is that in Portugal, mobile data volumes are on average small, and prices are among the highest in Europe. Users suffer from an over-concentrated market – three major ISPs share 98% of the market. In this setting, the leading companies can afford to ignore the regulator’s public recommendations without practical consequences. The legislator has not introduced the fines for net neutrality infringements that have been mandatory under EU law since 2015.
When Facebook CEO Mark Zuckerberg was grilled by Representative Alexandria Ocasio-Cortez in a hearing of the United States House Committee on Financial Services on 23 October, he admitted that if Republicans paid to spread a lie on Facebook’s services, it would probably not be prohibited. Political advertisements are not subject to any fact-checking review that could lead to the refusal or blocking of this promoted content. In Zuckerberg’s vision, if a politician lies, an open public debate helps expose these lies and the electorate holds the politician accountable by rejecting her or his ideas. The principle of free speech rests on this very idea: all statements should be debated, and the bad ones will naturally be set aside. The only problem is that neither Facebook nor Twitter provides an infrastructure for such an open public debate.
These companies do not display content in a neutral and universal way to everybody. What one sees reflects what their personal data have been revealing about their life, preferences and habits. Information is broadcast to each user in a selective, narrowly defined manner, in line with what the algorithms have concluded about that person’s past online activity. Hence, so-called “filter bubbles”, combined with human inclination for confirmation bias, capture individuals in restricted information environments. These prevent people from forming opinions based on diversified sources of information – a core principle of open public debate.
Some parties in this discussion would like to officially acknowledge the critical-infrastructure status that dominant social media have in our societies, considering their platforms the new place where the public sphere unfolds. This would imply applying to social media platforms the existing laws on TV channels and radio broadcasters that require them to carry certain types of content and to exclude others. Considering the amount of content posted every minute on each of those platforms, the recourse to automatic filtering measures would be inevitable. This would also cement their power over people’s speech and thoughts.
Banning political ads is a positive step towards reducing the harm caused by the amplification of false information. However, this measure still misses the point: the most crucial problem is micro-targeting. Banning political ads is unlikely to stop micro-targeting, since that’s the business model of all the main social media companies, including Twitter.
The first step of micro-targeting is profiling. Profiling consists of collecting as much data as possible on each user to build behavioural tracking profiles – it has been proven that Facebook has extended this collection even to those who are not using its platform. Profiling is enabled by keeping the user trapped on the platform and extracting as much attention and “engagement” as possible. The “attention economy” relies on content that keeps us scrolling, commenting and clicking, and which content does the job is predicted based on our tracking profiles – usually offensive, shocking and polarising content. This is why political content is among the most effective at maximising profits. It does not even need to be paid for.
Twitter CEO Jack Dorsey is right in affirming that this is not a freedom of expression issue, but rather a question of outreach, to which no fundamental right attaches. On the contrary, the rights to data protection and to privacy are human rights, and it is high time for the European Union to substantiate them against harmful profiling practices. A step towards that would be to adopt a strong ePrivacy Regulation. This piece of legislation would reinforce the safeguards the General Data Protection Regulation (GDPR) introduced, ensure that privacy by design and by default are guaranteed, and tackle the pervasive model of online tracking.
On 29 August 2019, the much-awaited new Greek data protection law came into force. This law (4624/2019) implements both the EU Law Enforcement Directive (LED, 2016/680) and the General Data Protection Regulation (GDPR) at national level. However, from the first days after the law was adopted, a lot of criticism was voiced concerning the lack of conformity of its provisions with the GDPR.
The Greek data protection law was adopted following the European Commission’s decision of July 2019 to refer Greece to the Court of Justice of the European Union (CJEU) for not transposing the LED on time. Thus, the national authorities acted fast to adopt a new data protection law. Unfortunately, the process was rushed. As a result, the new law suffers from important shortcomings and includes articles that conflict with the provisions of the LED or even the GDPR.
In September 2019, Greek EDRi observer Homo Digitalis, together with the Greek consumer protection organisation EKPIZO, sent a joint request to the Hellenic data protection authority (DPA), asking it to issue an Opinion on the conformity of the Greek law with the provisions of the LED and the GDPR. The DPA issued a press statement in early October 2019 announcing that it will deliver an Opinion in due time. Moreover, on 24 October 2019, Homo Digitalis filed a new complaint to the European Commission regarding the provisions of the Greek data protection law that conflict with the EU data protection regime.
Professor Mitrou states that, on the positive side, the Greek legislator has introduced further limitations on the processing of sensitive data (genetic data, biometric data or data concerning health). Thus, according to Article 23 of the new Greek law, the processing of genetic data for health and life insurance is expressly prohibited. “In this respect the Greek law, by stipulating prohibition on the use of genetic findings in the sphere of insurance, precludes the risk of results of genetic diagnosis being used to discriminate against people,” she says.
However, a strong point of criticism relates to the provisions on the further use of data for other purposes. The Greek law introduces very wide and vague exceptions to the purpose limitation principle, which prohibits the further use of data for incompatible purposes. “For example, private entities are allowed to process personal data for preventing threats against national or public security upon request of a public entity. Serious concerns are raised also with regard to the limitations of the data subjects’ rights,” Professor Mitrou points out.
She recalls that the Greek legislator “has made extensive use of the limitations permitted by Article 23 of the GDPR to restrict the right to information, the right to access and the right to rectification and erasure”. However, these restrictions have been adopted without fully complying with the safeguards provided in Article 23(2) GDPR. Moreover, the Greek law allows the data controller not to erase data upon request of the data subject if the controller has reason to believe that erasure would adversely affect legitimate interests of the data subject. Thus, the Greek legislator allows the data controller to substitute its own judgement for the will of the data subject.
“The Greek law has not respected the GDPR as standard borderline and has (mis)used ‘opening clauses’ and Member State discretion not to enhance but to reduce the level of data protection,” Professor Mitrou concludes.
On 21 October, David Kaye – UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression – released the preliminary findings of his sixth report on information and communication technology. They include tangible suggestions to internet companies and states whose current efforts to control hate speech online are failing to comply with the fundamental principles of human rights. The EU Commission should consider Kaye’s recommendations when creating new rules for the internet and – most importantly – when drafting the Digital Services Act (DSA).
The “Report of the Special Rapporteur to the General Assembly on online hate speech” (docx) draws on international legal instruments on civil, political and non-discrimination rights to show how human rights law already provides a robust framework for tackling hate speech online. The report offers an incisive critique of platform business models which, supported by States, profit from the spread of “hateful content” whilst violating free expression by wantonly deleting legal content. Instead, Kaye offers a blueprint for tackling hate speech in a way which empowers citizens, protects online freedom, and puts the burden of proof on States, not users. Whilst the report outlines a general approach, the European Commission should incorporate Kaye’s advice when developing the proposed Digital Services Act (DSA) and other related legislation and non-legal initiatives, to ensure that the regulation of hate speech does not inadvertently violate citizens’ digital rights.
Harmful content removal: under international law, there is a better way
Sexism, racism and other forms of hate speech (which Kaye defines as “incitement to discrimination, hostility or violence”) in the online environment are quite rightly areas of attention for global digital policymakers and lawmakers. But the report offers a much-needed reminder that restricting freedom of expression online by deleting content is not just an ineffective solution: it threatens a multitude of rights and freedoms that are vital for the functioning of democratic societies. Freedom of expression is, as Kaye states, “fundamental to the enjoyment of all human rights”. If curtailed, it can open the door for repressive States to systematically suppress their citizens. Kaye gives the example of blasphemy laws: profanity, whilst offensive, must be protected – otherwise such laws can be used to punish and silence citizens who do not conform to a particular religion. Others, such as journalist Glenn Greenwald, have already pointed out how “hate speech” legislation has been used in the EU to suppress left-wing viewpoints.
Fundamental rules for restricting freedom of expression online
The report is clear that restrictions of online speech “must be exceptional, subject to narrow conditions and strict oversight”, with the burden of proof “on the authority restricting speech to justify the restriction”. Any restriction is thus subject to three criteria under human rights law:
First, under the legality criterion, Kaye uses human rights law to show that any speech restricted online (as offline) must be genuinely unlawful, not just offensive or harmful. Speech must be regulated in a way that does not give “excessive discretion” to governments or private actors, and that gives independent routes of appeal to affected individuals. Conversely, the current situation gives de facto regulatory power to internet companies by allowing (and even pressuring) them to act as the arbiters of what does and does not constitute free speech. Coupled with error-prone automated filters and short takedown periods that incentivise over-removal of content, this is a free speech crisis in motion.
Second, on the question of legitimacy, the report outlines the requirement that laws and policies treat online hate speech in the same way as any other speech. This means ensuring that freedom of expression is restricted only for legitimate interests, and not curtailed for “illegitimate purposes” such as suppressing criticism of States. Such illegal suppression is enabled by overly broad definitions of hate speech, which can act as a catch-all for content that States find offensive, despite it being legal. A lack of strict definitions in the counter-terrorism policy field has already had a strong impact on freedom of expression in Spain, for example: “national security” has been abusively invoked to justify measures interfering with human rights, and used as a pretext to adopt vague and arbitrary limitations.
Lastly, necessity and proportionality are violated by current moderation practices, including “nearly immediate takedown” requirements and automated filters that clumsily censor legal content, which becomes collateral damage in the war against hate speech. These practices violate rights to due process and redress, and unnecessarily put the burden of justifying content on users. Worryingly, Kaye adds that “such filters disproportionately harm historically under-represented communities.”
A rational approach to tackling hate speech online
The report offers a wide range of solutions for tackling hate speech whilst avoiding content deletion or internet shutdowns. Guided by human rights documents including the so-called “Ruggie Principles” (the 2011 UN Guiding Principles on Business and Human Rights), the report emphasises that internet companies need to exercise a greater degree of human rights due diligence. This includes transparent review processes, human rights impact assessments, clear routes of appeal and human, rather than algorithmic, decision-making. Crucially, Kaye calls on internet platforms to “de-monetiz[e] harmful content” in order to counteract the business models that profit from viral, provocative, harmful content. He stresses that the biggest internet companies must bear the cost of developing solutions, and share them with smaller companies to ensure that fair competition is protected.
The report is also clear that States must take more responsibility, working in collaboration with the public to put in place clear laws and standards for internet companies, educational measures, and remedies (both judicial and non-judicial) in line with international human rights law. In particular, they must take care when developing intermediary liability laws to ensure that internet companies are not forced to delete legal content.
The report gives powerful lessons for the future DSA and other related policy initiatives. To protect fundamental human rights, we must limit content deletion (especially automated deletion) and avoid measures that make internet companies de facto regulators: they are not – nor would we want them to be – human rights decision-makers. We must take the burden of proof away from citizens, and create transparent routes for redress. Finally, we must remember that the human rights rules of the offline world apply just as strongly online.
The 8th annual Privacy Camp will take place in Brussels on 21 January 2020.
With the focus on “Technology and Activism”, Privacy Camp 2020 will explore the significant role digital technology plays in activism, enabling people to bypass traditional power structures and fostering new forms of civil disobedience, but also enhancing the surveillance power of repressive regimes. Together with activists and scholars working at the intersection of technology and activism, this event will cover a broad range of topics from surveillance and censorship to civic participation in policy-making and more.
The call for panels invites classical panel submissions, but also interactive formats such as workshops. We have particular interest in providing space for discussions on and around social media and political dissent, hacktivism and civil disobedience, the critical public sphere, data justice and data activism, as well as commons, peer production, platform cooperativism, and citizen science. The deadline for proposing a panel or a workshop is 10 November 2019.
In addition to traditional panel and workshop sessions, this year’s Privacy Camp invites critical makers to join the debate on technology and activism. We are hosting a Critical Makers Faire for counterculture and DIY artists and makers involved in activism. The Faire will provide a space to feature projects such as biohacking, wearables, bots, glitch art, and much more. The deadline for submissions to the Makers Faire is 30 November.
Privacy Camp is an annual event that brings together digital rights advocates, NGOs, activists, academics and policy-makers from Europe and beyond to discuss the most pressing issues facing human rights online. It is jointly organised by European Digital Rights (EDRi), Research Group on Law, Science, Technology & Society at Vrije Universiteit Brussel (LSTS-VUB), the Institute for European Studies at Université Saint-Louis – Bruxelles (USL-B), and Privacy Salon.
Privacy Camp 2020 takes place on 21 January 2020 in Brussels, Belgium. Participation is free and registrations open in December.
The Body of European Regulators for Electronic Communications (BEREC) is currently in the process of overhauling its guidelines on the implementation of Regulation (EU) 2015/2120, which forms the legal basis of the EU’s net neutrality rules. At its most recent plenary, BEREC produced new draft guidelines and opened a public consultation on this draft. The proposed changes to the guidelines are a mixed bag.
5G network slicing
The new mobile network standard 5G specifies the ability of network operators to provide multiple virtual networks (“slices”) with different quality characteristics over the same network infrastructure, called “network slicing”. Because end-user equipment can be connected to multiple slices at the same time, providers could use the introduction of 5G to create new products where different applications make use of different slices with their associated quality levels. In its draft guidelines, BEREC clarifies that it is the user who must be able to choose which application makes use of which slice. This is a welcome addition.
Zero-rating
Zero-rating is the practice of billing the traffic used by different applications differently, and in particular not deducting the traffic created by certain applications from a user’s available data volume. This practice has been criticised because it reduces consumers’ choice of which applications they can use, and disadvantages new, small application providers against the big, already established players. These offers broadly come in two types: “open” zero-rating offers, where application providers can apply to become part of the programme and have their application zero-rated, and “closed” offers, where that is not the case. The draft outlines specific criteria according to which open offers can be assessed.
Parental control filters
While content- and application-specific pricing is an additional challenge for small content and application providers, content-specific blocking can create even greater problems. Nevertheless, the draft contains new language that carves products such as parental control filters operated by the access provider out of the provisions of the Regulation that prohibit such blocking, instead subjecting them to a case-by-case assessment by the regulators (as is the case for zero-rating). The language does not clearly exclude filters that are sold in conjunction with the access product and are on by default, and the rules can even be read as requiring users who do not want to be subjected to the filtering to manually reconfigure each of their devices.
Deep Packet Inspection
Additionally, BEREC is running a consultation on two paragraphs in the guidelines to which it has not yet proposed any changes. These paragraphs establish important privacy protections for end-users: they prohibit access providers from using Deep Packet Inspection (DPI) when applying traffic management measures in their networks, and thus protect users from having the content of their communications inspected. However, according to statements made during the debriefing session of the latest BEREC plenary, some actors want to allow providers to look at domain names, which can reveal very sensitive information about a user and which require DPI to extract from the data stream.
EDRi member epicenter.works will respond to BEREC’s consultation and encourages other stakeholders to participate. The proposed changes are significant, which is why clearer language is required and users’ privacy needs to remain protected. The consultation period ends on 28 November 2019.
Austrian EDRi member epicenter.works filed a complaint with the Austrian data protection authority (DPA) about the Passenger Name Record (PNR) system in August 2019, with the aim of overturning the EU PNR Directive. On 6 September, the DPA rejected the complaint. This was good news, as the rejection was the only way to bring the case before the Federal Administrative Court.
The complaint: Objections
Epicenter.works’ complaint about the PNR system to the Federal Administrative Court contains a number of objections. The largest and most central one concerns the entire PNR Directive itself. The Court of Justice of the European Union (CJEU) has already repeatedly declared similar mass surveillance measures to be contrary to fundamental rights, for example in the case of data retention or in the expert opinion on the PNR agreement with Canada.
A complaint cannot be lodged directly with the CJEU; instead, the Administrative Court must refer questions on the interpretation of the law to the CJEU, as epicenter.works suggested in the complaint. The first suggested question can be summarised as follows: “Does the PNR Directive contradict the fundamental rights of the EU?”
Moreover, Austria has not correctly implemented the PNR Directive: it has partially extended the Directive’s application and has failed to implement important restrictions the Directive contains. For example, the Directive requires that every automatic hit – for instance, when someone is flagged as a potential terrorist – be checked by a person. This has not been implemented in the Austrian PNR Act. The question to the CJEU proposed in the complaint is therefore: “If the PNR Directive is valid in principle, is the processing of PNR data permitted even though automatic hits do not have to be checked by a person?”
Where the Austrian PNR Act goes beyond the Directive, epicenter.works suggests that the Court should request the Constitutional Court to repeal certain provisions.
The Austrian PNR Act goes further than the Directive
According to the PNR Directive, PNR data may only be processed for the purpose of prosecuting terrorist offences and certain serious criminal offences. These serious crimes are listed in an annex to the Austrian PNR Act, translated directly from the PNR Directive. However, some of them have no equivalent crime in Austrian law, leaving the entire provision unclear. Because of this flaw, the complaint asks the Constitutional Court to repeal this provision of the PNR Act. The list of terrorist offences in the PNR Act also goes much further than the Directive.
The PNR Directive only requires EU Member States to record flights to or from third countries, leaving the recording of intra-EU flights optional for Member States. Many countries have also extended this to domestic flights. In Austria, the Minister of the Interior can do this by decree without giving any specific reason. The complaint suggests that the Constitutional Court should delete this provision, because it has a strong impact on the fundamental rights of millions of people — without any justification of its necessity or proportionality.
Finally, the PNR Act also gives customs authorities and even the military access to PNR data. This is neither provided for in the PNR Directive nor necessary for the prosecution of alleged terrorists and those suspected of serious crimes, and it is therefore an excessive measure. Here, too, the complaint suggests that the Constitutional Court should delete the provisions that give these authorities access to PNR data.