On 30 May 2019, EDRi observer Homo Digitalis filed a complaint with the European Commission concerning a breach of EU data protection law by Greece. The European Commission registered the complaint under reference number CHAP(2019)01564 on 6 June 2019; its services will assess the complaint and provide a reply within 12 months.
Homo Digitalis claims that Greece has breached Article 63, paragraph 1 of Directive 2016/680, also known as the Law Enforcement Directive (LED). According to this Article, Member States shall adopt and publish, by 6 May 2018, the laws, regulations and administrative provisions necessary to comply with the LED. However, the Greek State has not published any national law in this regard, and more than one year after the above-mentioned deadline, it has not applied any related provisions.
The provisions of the LED are intended to cover all personal data processing undertaken for law enforcement (police and criminal justice) purposes, regardless of whether the processing takes place within or across national borders. In this way, the Framework Decision 2008/977/JHA’s most basic restriction is finally lifted and law enforcement authorities within the EU have to implement the LED’s provisions into their everyday personal data processing activities. Therefore, a Greek national law implementing the provisions of the LED is crucial for ensuring a consistent and high level of protection of people’s data when those are processed for the prevention, investigation, detection, and prosecution of criminal offenses.
Since Greece has not respected the deadline set by the EU legislator, it fails to meet EU requirements on strengthening the rights of data subjects and the obligations of those who process personal data. It also fails to provide equivalent powers for monitoring and ensuring compliance with data protection rules in Greece. Before submitting the complaint, Homo Digitalis had taken a number of actions at the national level.
In the complaint, Homo Digitalis also underlines shortcomings related to the enforcement of the General Data Protection Regulation (GDPR). Despite the fact that the provisions of the GDPR are binding in their entirety and directly applicable in all Member States since 25 May 2018, the Greek State has still not published a national law implementing the GDPR’s provisions. This is particularly troubling given that EU legislators left many important measures to the discretion of the Member States, such as rules regulating the processing of genetic data, biometric data or data concerning health (Article 9), or the protection of employees’ personal data in the context of employment (Article 88).
The German Big Brother Awards (BBA) gala was held on 8 June 2019 in Bielefeld, Germany. Organised annually since 2000 by EDRi member Digitalcourage, this year’s gala was the third to be streamed live in English in addition to the original German. For the second time, the venue was Bielefeld theatre, where the stage set for an operetta that had premiered the previous day was adapted for the presentation.
The award in the “Authorities and Administration” category was given to the Interior Minister of the Federal State of Hesse, Peter Beuth. After the tightening of the Hessian police laws had earned Hesse’s conservative–green coalition an award in 2018, this marked the first time that two successive awards had gone to the same governing coalition of the same Federal State. The 2019 award was given for the acquisition of software from Palantir, a controversial US company with close links to the Central Intelligence Agency (CIA), to analyse and interrelate data from various sources ranging from police databases to social media. The use of this software for “preventive” police work, which was given its own legal basis in a late addition to the police law of 2018, was criticised as having disastrous effects on human rights and on the rule of law. By commissioning Palantir to supply, adapt and operate the software, the US company was given access to sensitive police data on non-US citizens, which it must, pursuant to the Foreign Intelligence Surveillance Act (FISA), share with US secret services if warranted. On top of that, the software’s algorithms are a trade secret, so any “police findings” it may come up with are beyond effective scrutiny. In his speech, laudator Rolf Gössner also raised questions about the guarded manner in which Palantir’s services were commissioned.
No 2019 award was given in the traditional “Workplace” category. Jury member Peter Wedde was interviewed on stage and explained that there are still many award-worthy issues in the workplace, but that reporting is a problem: either the media consider an individual complaint too small, or alarm systems inside a business prevent “big” issues from being reported externally. Other problems are that workers’ representation is often not established in companies and that violations often go unpenalised. He described details from a number of individual cases and called for improved whistleblower protections.
An award in the “Biotechnology” category went to the largest provider of consumer DNA testing worldwide, ancestry.com. Laudator Thilo Weichert challenged claims in the company’s terms that consumers who supply DNA samples and accompanying information on themselves and their families would retain data ownership. The data protection statement grants Ancestry the right to share data with a broad range of partners, while sample suppliers are denied information on the research these partners conduct with their data. Conversely, consumers are barred by Ancestry from sharing the results of “their” analyses with others. Among the companies buying data that consumers supply to Ancestry are large pharmaceutical companies such as GlaxoSmithKline. Ancestry was criticised for not informing its customers about other risks, such as becoming the focus of police searches for suspects using DNA data, or possible family disruptions and psychological consequences, and for ignoring German legal requirements on warning about such consequences, on information disclosure, and on data protection.
The winner in the “Communication” category was Precire Technologies (formerly called Psyware) of Aachen, Germany. This company offers artificial intelligence (AI) based analysis of speech, including recorded phone conversations, from which it claims to be able to derive psychological profiles. One service offered by Precire is pre-selection of job applicants by encouraging them to take part in computer-led, seemingly innocuous phone conversations. In her award speech, laudator Rena Tangens expressed doubt about the validity of such judgements, pointing to contradictions in the company’s arguments (for example, that speech patterns are likened to immutable fingerprints, whereas Precire also offers a speech training app), and criticised Precire’s refusal to publish its own studies on its technologies, while publicly available studies are based only on data provided by the company itself. People who are not looking for jobs can still be exposed to the company’s technology when their phone calls are handled by call centres, where the software advises agents on how to treat a case based on an analysis of the caller’s emotions.
The “Technology” award went to the “Technical Committee CYBER” of the European Telecommunications Standards Institute (ETSI) for its efforts to establish an alternative to the newest version of the internet encryption standard “Transport Layer Security” (TLS 1.3) under the name “Enterprise Transport Security” (ETS, formerly eTLS). Laudator Frank Rosengart described this standard as clearly being designed in the interests of government and secret service surveillance as it includes key escrow, a process where “backdoor” keys are retained and possibly handed over to investigators. Anyone obtaining such keys would be able to decrypt all future communications with an online service. Users, on the other hand, will have little opportunity to detect that this weaker encryption is being used or to prevent it. The standard was created despite warnings by the Internet Engineering Task Force (IETF) and other experts.
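Why a single escrowed key is so dangerous can be illustrated with a toy Diffie-Hellman sketch. The code below is a minimal illustration, not the actual TLS 1.3 or ETS handshake; the group parameters and all names are illustrative assumptions. The point it demonstrates is the one in the speech: with ephemeral keys (the TLS 1.3 approach), each session has its own secret, while with a static, escrowable server key (the ETS approach), whoever holds the escrowed key can recompute every session secret from the public handshake values alone.

```python
# Toy finite-field Diffie-Hellman sketch (NOT real TLS) illustrating why the
# static, escrowable server key in ETS undermines the forward secrecy that
# TLS 1.3's ephemeral key exchange provides. Parameters are toy-sized.
import secrets

P = 2**127 - 1   # a Mersenne prime; far too small for real-world use
G = 3

def keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

# TLS 1.3 style: the server generates a *fresh* ephemeral key per handshake,
# so no single long-term key can decrypt past or future sessions.
def tls13_session():
    c_priv, c_pub = keypair()
    s_priv, s_pub = keypair()
    assert pow(s_pub, c_priv, P) == pow(c_pub, s_priv, P)
    return pow(s_pub, c_priv, P)

# ETS style: the server reuses one *static* key, a copy of which can be
# handed to an escrow agent.
def ets_session(server_static_pub):
    c_priv, c_pub = keypair()
    return c_pub, pow(server_static_pub, c_priv, P)

escrowed_priv, server_pub = keypair()        # escrow agent keeps escrowed_priv

c_pub_1, secret_1 = ets_session(server_pub)  # two independent sessions
c_pub_2, secret_2 = ets_session(server_pub)

# The escrow agent, seeing only the public handshake values, recovers every
# session secret from the single escrowed key:
assert pow(c_pub_1, escrowed_priv, P) == secret_1
assert pow(c_pub_2, escrowed_priv, P) == secret_2
```

In the ephemeral case, an investigator handed yesterday’s server key learns nothing about today’s sessions; in the static case, one key unlocks them all, which is exactly the property the award speech criticised.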
A much-debated award in the “Consumer Protection” category was given to a leading German news website, Zeit Online, for the use of tracking on its website, for the previous use of Google online services to store personal data, including details on political opinions, about users of an award-winning project called “Germany Talks”, and for accepting Google sponsorship for the international successor project, “My Country Talks”. Laudator padeluun explained that he has been friends with the paper’s online editor-in-chief, Jochen Wegner, for many years. He praised Zeit’s journalism as well as the project “Germany Talks”, which brought people of conflicting political opinions together for personal conversations. Despite the good intentions, the speech explained, the project had in its implementation phase succumbed to the temptation of using Google’s Cloud Office to handle registrations, and Zeit had also used these services for collaborative work on other journalistic investigations. The award speech called on Zeit Online to realise the consequences of the mass surveillance that it had extensively reported on after the Snowden revelations, to consequently expand its own IT base, and to look for alternatives to using online tracking as a source of revenue for its online services. Four days before the gala, Jochen Wegner had made the award public in a response on Zeit’s editorial blog “Glashaus”. Digitalcourage made clear that this course of action did not constitute a breach of its journalistic embargo – Zeit had been notified, as are all BBA awardees, and like all such “winners” it was free to react. Zeit’s response acknowledged some of the critique in the award speech while disputing other aspects. Jochen Wegner also visited the gala and accepted the award in person, using the customary opportunity to voice his opinion in an on-stage interview. His appearance was acknowledged with long and respectful applause.
The Big Brother Awards organisers and Zeit Online are looking to continue the conversation and hopefully reach tangible conclusions.
On 13 June 2019, the Danish football club Brøndby IF announced that starting in July 2019, automated facial recognition (AFR) technology will be deployed at Brøndby Stadium. It will be used to identify persons who have been banned from attending Brøndby IF football matches for violations of the club’s own rules of conduct. The AFR system will use cameras that scan the public area in front of the stadium entrances, so that persons on the ban list can be “picked out” from the crowd before reaching the entrance.
The use of AFR technology at Brøndby Stadium comes with prior approval from the Danish Data Protection Authority (DPA) which is a requirement in the Data Protection Act, as explained below. Brøndby IF is the first company to secure an approval for using AFR in Denmark.
Under the EU General Data Protection Regulation (GDPR), biometric data for the purpose of uniquely identifying a person constitutes sensitive personal data (special categories of personal data in Article 9). This covers AFR. Article 9(1) of the GDPR prohibits the processing of sensitive personal data unless one of the conditions in Article 9(2) applies. The explicit consent of the data subject [Article 9(2)(a)] is one of these conditions, and generally speaking the most relevant one for private controllers. Consent cannot be the legal basis for using AFR at a football stadium though, since consent must be voluntary.
GDPR Article 9(2)(g) allows processing of sensitive personal data if the processing is necessary for reasons of substantial public interest, on the basis of EU or Member State law, which must be proportionate to the aim pursued. The law must provide for suitable and specific measures to safeguard the fundamental rights and the interests of the data subject.
Based on Article 9(2)(g), the Danish GDPR supplementary provisions (“Data Protection Act”) contain a general carve-out from the prohibition on processing sensitive personal data. Section 7(4) of the Data Protection Act provides that “the processing of data covered by Article 9(1) of the GDPR may take place if the processing is necessary for reasons of substantial public interest.” Prior authorisation from the DPA is required for controllers that are not public authorities, and this authorisation may lay down more detailed terms for the processing.
Denmark has no specific national law providing a legal basis for the use of AFR by controllers along with suitable safeguards for data subjects. However, Section 7(4) can be used to allow virtually any processing of sensitive personal data, including AFR, assuming that the threshold of substantial public interest is met. The explanatory remarks to Section 7(4) state that the provision must be interpreted narrowly, but the actual scope of the open-ended derogation is left to administrative practice for public controllers and to authorisation decisions by the DPA for processing by private controllers.
With the authorisation to Brøndby IF, the Danish DPA has decided that the processing with AFR to enforce a private ban list is necessary for reasons of substantial public interest, and that the processing is proportionate to the aim pursued. The logic of that decision is rather difficult to understand in the present case. AFR is one of the most invasive surveillance technologies since a large number of persons in a crowd can be identified from their biometrics (facial images) and automatically catalogued based on matches with pre-defined watch lists. At the same time, AFR is a very unreliable and inaccurate technology with known systematic biases in the form of higher error rates for certain ethnic minorities.
At Brøndby Stadium, AFR will be used to process sensitive personal data of, on average, 14000 persons per football match. The ban list currently contains only 50 persons, and there is no information available about how many of these 50 persons are actually trying to circumvent the ban and get access to Brøndby Stadium. There is also no pressing public security need for using this very invasive surveillance technology. The number of arrests by the Danish police in connection with football matches is at a record low, and rather ironically the Brøndby IF press release even highlights that there has been a positive development regarding security at Danish football matches over the last ten years. This evidence must, at the very least, call into question the proportionality of using AFR, even before considering whether there are really reasons of substantial public interest involved.
To the Danish newspaper Berlingske, the Danish DPA commented that there is no rigid definition of “substantial public interest”. In the application from Brøndby IF, the DPA considered the issue of security at certain sports events with large audiences. The DPA further told Berlingske that AFR would allow for more effective enforcement of the ban list compared to manual checks, and that this could reduce queues at the stadium entrances, lowering the risk of public unrest from impatient football fans standing in queues.
The claims for the effectiveness of AFR are contradicted by the findings of independent evaluations of the technology. A report by the UK civil liberties organisation Big Brother Watch analyses the use of AFR by the Metropolitan Police and the South Wales Police at festivals and sports events, deployments comparable to the plans of Brøndby IF. Evidence obtained from the UK police through freedom of information (FOI) requests documents that 95% of the AFR matches are false-positive identifications: people who are not on any watch list are nonetheless “identified” by the AFR technology. The obvious conclusion is that AFR is simply not a reliable and accurate technology for identifying persons in a large crowd. The unreliability of AFR could also affect the legality of using the technology, since one of the GDPR principles in Article 5(1)(d) is that personal data must be accurate. AFR matches are personal data, but very far from accurate.
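High false-positive shares like the 95% reported from the UK deployments follow from simple base-rate arithmetic whenever a watch list is tiny compared to the crowd being scanned. The sketch below uses the figures from the Brøndby case (roughly 14,000 attendees per match, 50 people on the ban list); the per-face error rates and the number of banned persons actually turning up are illustrative assumptions, not specifications of the Brøndby system.

```python
# Base-rate sketch: even a seemingly accurate face matcher produces mostly
# false alarms when almost nobody in the crowd is on the watch list.
attendees = 14_000              # average crowd per match (from the article)
banned_attending = 2            # assumed: only a few of the 50 try to enter

false_positive_rate = 0.001     # assumed: 0.1% of innocent faces mismatched
true_positive_rate = 0.90       # assumed: 90% of banned faces matched

false_alarms = (attendees - banned_attending) * false_positive_rate
true_hits = banned_attending * true_positive_rate

precision = true_hits / (true_hits + false_alarms)
print(f"Expected false alarms per match: {false_alarms:.1f}")
print(f"Expected true hits per match:    {true_hits:.1f}")
print(f"Share of alerts that are correct: {precision:.0%}")
```

Under these assumptions, roughly nine out of ten alerts would point at innocent attendees – in the same ballpark as the UK figures, even with a per-face error rate of only 0.1%.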
It is unclear whether the reliability of AFR, or rather the lack thereof, played any role in the DPA’s decision to grant authorisation for using AFR at Brøndby Stadium. Brøndby IF seems to assume that AFR is an almost perfect technology: the press release claims that the AFR system will not be able to identify or register persons who are not on the ban list, implicitly ruling out any false-positive identification. Needless to say, this claim is demonstrably wrong. The authorisation from the DPA does not mention the accuracy of AFR, and there are no specific requirements for the controller to take measures to limit false-positive identifications, or even to keep track of the magnitude of this problem. The “more detailed terms” set by the DPA in the authorisation to Brøndby IF add little to the ordinary GDPR obligations for controllers.
Danish EDRi member IT-Pol publicly criticised the plans for deployment of AFR technology at Brøndby Stadium. The threshold set by the Danish DPA in terms of requirements for a substantial public interest and proportionality seems very low, and this could lead to a large number of applications for using AFR by other private controllers in Denmark. Indeed, within just two days of the Brøndby IF press release, another Danish football club (AGF) expressed an interest in using AFR at its stadium and in exchanging biometric information about persons on ban lists with Brøndby IF. Incidentally, AGF has recently installed a new video surveillance system which is able to use AFR although the AFR functionality is currently deactivated in the system. Since AFR is largely about software analysis of captured video images, there is probably a large number of modern video surveillance systems in Denmark where AFR functionality could potentially be activated, perhaps through a software upgrade.
Owing to the initiative of the Polish EDRi member Panoptykon, bank clients in Poland will have the right to receive an explanation of the assessment of their creditworthiness. Panoptykon proposed and fought for amendments to the Polish banking law, which resulted in an even higher standard than the one envisioned in the General Data Protection Regulation (GDPR).
There is naturally a strong asymmetry of power between banks and their clients. So far, this manifested itself, for example, in the fact that banks could demand that clients present any information connected with their life situation and the purpose of the loan, and could also obtain information from other sources. Apart from the generally binding principles of personal data protection, there were no other restrictions in that scope. In effect, a client who was denied a loan by the bank could only guess what the problem was – income, the form of employment, or perhaps liabilities not paid on time. That will change: clients of Polish banks will be able to check which factors were decisive in the assessment of their creditworthiness.
More than the GDPR
A consumer will have the right to obtain “information on the factors, including personal data, which affected the evaluation of their creditworthiness”. That right applies irrespective of whether or not a credit decision was automated and regardless of its content.
The GDPR guarantees transparency limited to automated decisions. However, in reality, the line between the assessment made by the algorithm and the final credit decision made by an analyst may be blurred. Moreover, irrespective of the degree of human involvement, a credit decision is based on an advanced analysis of personal data and on the profiling of clients. From that perspective, extending the right to explanation to all decisions based on profiling and using big data is an excellent solution.
What should the bank tell the client?
The right to explanation encompasses factors – including personal data – which affected the creditworthiness assessment. The bank does not need to provide a full list of factors taken into account in that process, but it has to disclose all those which had an impact on the final decision. It will not be enough to specify that the basis for the negative assessment was, for instance, the income: the bank will be obliged to disclose the exact amount of income it took into consideration. This creates room for dialogue and a chance to correct mistakes (such as a missing zero in the amount of income, or an outdated report from a credit information bureau). In the long term, it also serves as valuable guidance for clients who wish to increase their credibility towards banks: the information received may become an impulse for timely repayment of liabilities or for seeking another form of employment.
Translating law into banking practice
The new regulations will undoubtedly strengthen the client’s position towards the bank. In relation to each automated credit decision, the client will have the GDPR rights to request rectification, to contest the decision, and to obtain human intervention. In relation to each decision issued with the participation of a bank employee, the client will also be able to use the new right and ask for the specific personal data which affected the final decision. These are two independent procedures, safeguarding a high standard of transparency and data protection.
With this achievement, Panoptykon has significantly reduced the power imbalance between banks and their clients. The achievement could be used by human rights and consumer groups as a precedent. As this case shows, the rights contained in the GDPR need organised action all across the EU to make the goals of the Regulation work in practice.
On 6 June 2019, representatives from eight civil society organisations (including EDRi members) met with officials from the European Commission (EC) Directorate General for Home Affairs (DG HOME) to discuss data retention. This meeting was, according to the EC officials, just one in a series of meetings that DG HOME is holding with different stakeholders to discuss potential data retention initiatives that could be put forward (or not) by the next Commission. The meeting is not connected to the Council conclusions on data retention, also published on 6 June, which coincidentally task the Commission with doing a study “on possible solutions for retaining data, including the consideration of a future legislative initiative”.
Ahead of the meeting, civil society was sent a set of questions about the impact of existing and potentially new data retention legislation on individuals, how a “legal” targeted data retention could be designed, and which specific issues (data retention periods, geographical restrictions, and so on) could be included if new data retention legislation were to be proposed.
According to the Commission, there are no clear “next stages” in the process, apart from the aforementioned study that will have to be prepared after the Council conclusions on data retention published on 6 June. The Commission will, in addition to this study, continue dialogues with civil society, data protection authorities, EU Fundamental Rights Agency and Member States that will inform a potential future action (or inaction) from the EC on data retention.
What is your priority when a terrorist attack or a natural disaster takes place close to where your parents live or where your friend went on holiday? Obviously, you immediately want to know how your loved ones are doing. You will call and text them until you get in touch.
Or, imagine that you happen to be close to an attack yourself. You have little or no information, and you see a person with weapons running down the road. You would urgently call the police, right? You try to call, but it isn’t possible to connect to the mobile network. Your apps are not working either. You can’t inform your loved ones, you can’t find information about what’s going on, and you can’t call the police. Right at the time that communication and knowledge are vital, you can’t actually do anything. Afterwards, it appears that the telecom providers switched off their mobile networks directly after the attack, obeying police orders. This measure was necessary for safety, because it was suspected that the perpetrators were using the mobile network.
This scenario isn’t that far-fetched. A few years ago, the telephone network in the San Francisco underground was partially disconnected. The operator of the metro network wanted to disrupt a demonstration against police violence, after a previous protest had disturbed the timetable. The intervention was considered justified on the grounds of passenger safety: as a consequence of the previous demonstrations, the platforms had become overcrowded with passengers who couldn’t continue their journeys. However, the intervention was harshly criticised, as the deactivation of the phone network itself endangered the passengers – after all, how do you alert the emergency services in an emergency when nobody’s phone is working?
In Sri Lanka, Facebook is practically a synonym for “the internet” – it’s the main communication platform in a country where the practice of zero-rating flourishes. As a result of Facebook’s dominance, content published on the platform can very quickly reach an enormous audience. And it is exactly the posts that capitalise on fear, discontent, and anger that have a huge potential to go viral, whether they are true or not. Facebook itself has no incentive to limit the impact of these posts. On the contrary: the most extreme messages contribute to the addictive nature of the social network. The posts themselves aren’t a threat to people’s physical safety, but in the context of terrorist attacks, they can be lethal.
The distribution of false information is apparently such a huge problem that the Sri Lankan government has no other option than to disconnect the main communication platform in the country. It’s a decision with far-reaching consequences: people are being isolated from their main source of information and from the only communication tool to reach their family and friends. We find ourselves in a situation in which the harmful side-effects of such a platform are perceived to be bigger than the gigantic importance of open communication channels and provision of information – rather no communication than Facebook-communication.
This shows how dangerous it is when a society is so dependent on one online platform. This dependency also makes it easier for a government to gain control by denying access to that platform. The real challenge is to ensure a large diversity of news sources and means of communication. In the era of information, dependency on one dominant source of information can be life-threatening.
Czech EDRi member Iuridicum Remedium (IuRe) has fought for 14 years against the Czech implementation of the controversial EU Data Retention Directive, which was declared invalid by the Court of Justice of the European Union (CJEU). After years of campaigning and many hard legislative battles, the fight has finally come to an end: on 22 May 2019, the Czech Constitutional Court rejected IuRe’s proposal to declare the Czech data retention law unconstitutional, despite the claim being supported by 58 deputies of the parliament from across the political spectrum.
In the Czech Republic, data retention legislation was first adopted in 2005. In March 2011, the Constitutional Court upheld IuRe’s first complaint against the original data retention legislation and annulled it. In 2012, however, a new legal framework was adopted to implement the EU Data Retention Directive – which the CJEU found to contravene European law in the Digital Rights Ireland case in 2014 – and to comply with the Constitutional Court’s decision. This new legislation still contained the problematic general and indiscriminate data retention, along with a number of related problems. Therefore, even in the light of the CJEU’s decisions, IuRe decided to prepare a new constitutional complaint.
IuRe originally submitted a complaint challenging the very principle of bulk data retention: the massive collection and storage of people’s data without any link to individual suspicion of criminal activity, extraordinary events, or terrorist threats. The CJEU has already declared this general and indiscriminate data retention inadmissible in two of its decisions (Digital Rights Ireland and Tele2). Although the Czech Constitutional Court refers to both judgments several times, its conclusions – especially when it comes to analysing why data retention is not in line with the Czech Constitution – do not deal with them properly.
The Constitutional Court’s main argument for declaring data retention constitutional is that as communications increasingly occur in the digital domain, so does crime. Even though this may be true, it is regrettable that the Constitutional Court did not develop this reasoning further and explain why it is in itself a basis for bulk data retention. The Court also ignored that greater use of electronic communication implies a correspondingly greater interference with privacy by general data retention.
The Court further argued that personal data, even without an obligation to retain them, are kept in any case for other purposes, such as invoicing for services, responding to claims, and behavioural advertising. In the Court’s opinion, the fact that people give operators their “consent” to process their personal data reinforces the argument that data retention is legal and acceptable. Unfortunately, the Constitutional Court does not take into consideration that the volume, retention period and sensitivity of personal data held by operators for other purposes are quite different from the obligatory data retention prescribed by the Czech data retention law. Furthermore, the fact that operators already keep some data (for billing purposes, for example) shows that the police would not be left completely in the dark without a legal obligation to store data.
In addition to the proportionality of data retention, which the Court has not clarified, another issue is how “effective” data retention is in reducing crime. Statistics from 2010 to 2014 show that there was no significant increase in crime, or reduction in crime detection, in the Czech Republic after the Constitutional Court abolished the obligation to retain data in 2011. Police statistics presented to the Court show that data retention is not helping to combat crime in general, nor facilitating the investigation of serious crimes (such as murders) or other types of crimes (such as fraud or hacking). In arguments submitted by police representatives and by the Ministry of the Interior, some examples of individual cases where the stored data helped (or where their absence hampered an investigation) were repeatedly mentioned. However, no evidence shown to the Court proved that general and indiscriminate data retention improves the ability of the police to investigate crimes.
The Court also did not annul the partially problematic parts of the legislation, such as the data retention period (six months), the volume of data to be retained, or the overly broad range of criminal cases in which data may be requested. Furthermore, the Court has not remedied the provisions of the Police Act that allow data to be requested without court authorisation in searches for wanted or missing persons or in the fight against terrorism.
In its decision, the Constitutional Court acknowledges that stored data are very sensitive and that in some cases the sensitivity of so-called “metadata” may be even greater than that of the content of the communications. Thus, the retention of communications data represents a significant threat to individuals’ privacy. Despite all of this, the Court dismissed IuRe’s claim to declare the data retention law unconstitutional.
IuRe disagrees with the outcome of this procedure, in which the Court has come to a conclusion on the constitutional conformity of the existing Czech data retention legislation. Considering the wide support for the complaint, IuRe will work on getting at least a part of the existing arrangements changed by legislative amendments. In addition to this, we will consider the possibility of the European Commission launching infringement proceedings, or of initiating other judicial cases, since we strongly believe that the existing bulk retention of communications data in Czech law still contravenes the aforementioned CJEU decisions on mass data retention.
The Supreme Court denied Facebook’s application in substance as the company was unable to substantiate its appeal. As a result, the Supreme Court decided not to take the actions requested by Facebook.
“Facebook likely invested, once again, millions to stop this case from progressing. It is good to see that the Supreme Court has not followed Facebook’s arguments that were in total denial of all existing findings so far. We are now looking forward to the hearing at the Court of Justice in Luxembourg next month,” said Max Schrems, complainant and chairperson of noyb.
The case follows a complaint by privacy lawyer Max Schrems against Facebook in 2013. More than six years ago, Edward Snowden revealed that Facebook allowed the US secret services to access personal data of Europeans under surveillance programs like “PRISM”. So far, the Irish DPC has not taken any concrete actions, despite the clear demands of the complaint to stop the EU-US data transfers by Facebook.
The case was first rejected by the Irish Data Protection Commissioner in 2013. It was then subject to a judicial review by the Irish High Court, which made a reference to the Court of Justice of the European Union (CJEU). The latter ruled in 2015 that the so-called “Safe Harbor” agreement that allowed EU-US data transfers is invalid (judgment in C-362/14) and that the Irish DPC must investigate the case.
The investigation lasted only a couple of months between December 2015 and spring of 2016. Instead of deciding over the complaint, the DPC filed a lawsuit against Facebook and Mr. Schrems at the Irish High Court in 2016, with a view to sending further questions to the CJEU. After more than six weeks of hearings, the Irish High Court found that the US government had engaged in “mass processing” of Europeans’ personal data and referred eleven questions to the CJEU for the second time in 2018.
In an unprecedented application made thereafter, Facebook tried to stop the reference by asking the Irish Supreme Court to advise the High Court on the reference. The CJEU announced that it plans to hear the case (now C-311/18) on 9 July 2019 – about six years after the filing of the original complaints.
After the CJEU’s judgment, the DPC would finally have to decide on the complaint for the first time. This decision would again be subject to possible appeals by Facebook or Mr. Schrems to the Irish courts.
Epicenter.works published a report in January 2019 which, among other things, surveys regulatory action based on the annual net neutrality reports by the NRAs. Port blocking is a severe form of traffic management since entire services, such as hosting of email or web servers by the end-user, are suppressed. This may be justified in certain situations, but requires a rigorous assessment under Article 3(3) third subparagraph, point b (preserve the integrity of the network) of Europe’s Net Neutrality Regulation (2015/2120).
Port blocking is generally quite easy to detect with network measurement tools. This is also noted in section 4.1.1 of BEREC’s Net Neutrality Regulatory Assessment Methodology (BoR (17) 178). Other forms of discriminatory traffic management are harder to detect. Based on this, it seems a reasonable conjecture to take NRA enforcement action on port blocking as indicative of the rigorousness of wider enforcement practices regarding traffic management. Unfortunately, detailed information on port blocking cases is not contained in most NRAs’ net neutrality reports.
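As a minimal illustration of why port blocking is comparatively easy to detect (this is a hypothetical sketch, not BEREC’s actual measurement methodology): a measurement client simply attempts a TCP connection to the service port in question and records whether it succeeds. Aggregated over many end-users and providers, such results reveal blocking patterns.

```python
import socket

def port_reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo with a local listener, so the result does not depend on the network:
# a listening port is reachable, the same port after closing is not.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # let the OS pick a free port
server.listen(1)
open_port = server.getsockname()[1]

print(port_reachable("127.0.0.1", open_port))   # listening: True
server.close()
print(port_reachable("127.0.0.1", open_port))   # closed: False
```

A real crowdsourced tool would of course run such probes against well-known service ports (e.g. 25 for SMTP) from many subscriber connections and compare results across providers.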
Following the publication of the Net Neutrality Guidelines in August 2016, BEREC launched a project to create an EU-wide network measurement tool, expected in late 2019. The measurement tool is based on the core principles of open methodology, open data, and open source. This means that the tool can be deployed on many devices, used by many end-users, and that the data generated through “crowdsourcing” by end-users (subscribers of internet access services, IAS) can be analysed by NRAs and other interested parties. In the opinion of EDRi, effective use of the forthcoming measurement tool, with crowdsourced measurement by end-users, will be a milestone in supervision and enforcement actions for traffic management practices.
Among other things, the measurement tool can be used for detection of unreasonable traffic management practices, establishing the real performance and Quality of Service (QoS) parameters of an IAS, assessing whether IAS are offered at quality levels that reflect advances in technology, and assessing whether the provision of specialised services risks deteriorating the available or general quality of IAS for end-users.
All of these tasks are specific obligations for NRAs under the Open Internet Regulation. As EDRi has highlighted before, the crowdsourcing aspect of the deployment of the measurement tool is very important as single measurements can contain a large element of noise, for example because of characteristic of the specific testing environment. In the aggregate, the noisy element can be expected to “wash out”, leaving the effect of the IAS traffic management practices or other network design choices by IAS providers.
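The statistical intuition behind the “washing out” of noise can be shown with a toy simulation (illustrative numbers only; the true throughput value and noise level below are assumptions, not real measurement data): a single speed test can deviate substantially from the performance the provider actually delivers, while the mean over many crowdsourced samples converges toward it.

```python
import random

random.seed(42)
TRUE_THROUGHPUT_MBPS = 80.0  # assumed "real" IAS performance (hypothetical)

def noisy_measurement():
    # each end-user test adds noise from its local testing environment
    return TRUE_THROUGHPUT_MBPS + random.gauss(0, 15)

single = noisy_measurement()
crowd = [noisy_measurement() for _ in range(10_000)]
crowd_mean = sum(crowd) / len(crowd)

print(round(single, 1))      # one test may be far off the true value
print(round(crowd_mean, 1))  # the aggregate lands close to 80
```

By the law of large numbers, the standard error of the mean shrinks with the square root of the number of samples, which is why scale (crowdsourcing) matters for reliable supervision.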
When a measurement tool developed by BEREC is freely available to NRAs, the Guidelines on Article 5 of the Regulation should be updated to contain specific requirements and recommendations for the use of network measurement tools in the NRA supervision tasks. NRAs should, of course, be free to choose between their own measurement tools and methodology and the one offered by BEREC to all NRAs.
The Regulation does not per se require NRAs to establish or certify a monitoring mechanism. Needless to say, the Guidelines cannot change that. Therefore, most provisions in the Guidelines related to network measurement tools will have to be recommendations for NRAs.
However, the Regulation specifically requires NRAs to closely monitor and ensure compliance with Article 3 and 4 of the Regulation. While NRAs should be free to choose their own regulatory strategies, allowing these strategies to be adapted to the local “market” conditions and need for enforcement action, some proactive element is required on behalf of NRAs. Simply responding to end-user complaints cannot be sufficient to satisfy the obligation under Article 5.
In the opinion of EDRi, it will be very difficult for NRAs to fulfil their monitoring obligations under Article 5 without some form of quantitative measurement from the IAS network. The last sentence of recital 17 of the Regulation concretely requires network measurements of latency, jitter and packet loss by NRAs to assess the impact of specialised services.
BEREC’s Guidelines with recommendations on the use of crowdsourced network measurements will have two positive implications for the net neutrality landscape in Europe. For the NRAs that follow the recommendations and actively use the BEREC measurement tool, we will have quantitative monitoring of compliance with Articles 3 and 4 that is harmonised and comparable across EU Member States. This will, in itself, be hugely beneficial, and contribute to a consistent application of the Net Neutrality Regulation.
In Member States where the NRA decides not to use the BEREC measurement tool (or its own), the recommendations in the Net Neutrality Guidelines could potentially facilitate shadow monitoring reports by civil society or consumer organisations. Of course, this can also be done without recommendations in the BEREC Guidelines, or even with measurement tools other than the one developed by BEREC, but adhering to the BEREC recommendations would create results that can be more easily compared with, for example, NRA net neutrality reports in Member States where the BEREC measurement tool is actively used.
EDRi will be pleased to contribute draft amendments to the Guidelines in order to formally incorporate a network measurement tool and crowdsourced measurements in the IAS network by end-users.
Three months before the new Serbian Law on Personal Data Protection becomes applicable, EDRi member SHARE Foundation asked 20 data companies from around the world – including Google and Facebook – to appoint representatives in Serbia as required by the new law. This is crucial for providing Serbian citizens and competent authorities with a contact point for all questions around the processing of personal data.
The new Law on Personal Data Protection in Serbia is modelled after the EU’s General Data Protection Regulation (GDPR) and creates an obligation for almost all large data companies to appoint representatives in the country. As soon as a company such as Google, Facebook, Amazon, Netflix or another IT giant offers products and services in Serbia for which it collects or processes personal data, it must appoint a representative. This can be a natural or legal person to which citizens can address their questions regarding their personal data rights. The representative must also cooperate with the Commissioner for Information of Public Importance and Personal Data Protection of the Republic of Serbia.
Google, for instance, has long recognised Serbia as a significant market and has adapted many services such as Gmail, YouTube, Google Chrome and Google Search to the local market. Additionally, Google targets Serbian citizens with localised advertisements and monitors their behaviour through cookies and other tracking technologies. Facebook is also available in Serbian, has about three million users in Serbia, and collects and processes huge amounts of personal data in order to profile users and show them targeted ads, as described in SHARE Lab’s Facebook algorithmic factory research.
But because Serbia is not yet a member of the EU, these companies do not grant Serbian users the same privacy protections as EU citizens. With permanent company representatives in Serbia, however, it would be easier for Serbian citizens to exercise their rights or initiate proceedings before the competent authorities. This is why SHARE Foundation sent open letters demanding the appointment of representatives in Serbia to the following companies: Google, Facebook, Amazon, Twitter, Snap Inc – Snapchat, AliExpress, Viber, Yandex, Booking, Airbnb, Ryanair, Wizzair, eSky, Yahoo, Netflix, Twitch, Kupujem prodajem, Toptal, GoDaddy, Upwork.