08 Jul 2020

Digital rights for all

By Sarah Chander

In this article we set out the background to EDRi's work on anti-discrimination in the digital age. Here we take a first step: exploring anti-discrimination as a digital rights issue, and asking what EDRi can do about it. The project is motivated by the need to recognise how oppression, discrimination and inequality impact the enjoyment of digital rights, and to live up to our commitment to uphold the digital rights of all.

The first half of 2020 has brought challenges and shifts on a global scale. From COVID-19 to #BlackLivesMatter, these events necessarily impact EDRi's work, raising issues of digital and human rights – our privacy, our safety, and our freedoms, online and off. Not only have these events brought privacy and surveillance to the forefront of global politics, they also teach us about vulnerability.

Vulnerability is not a new concept in digital rights. It is core to the fight to defend rights and freedoms online – we are vulnerable to targeted advertising, to exploitation of our personal data, to censorship, and to increased surveillance. Particularly in times of crisis, this vulnerability is exposed and exacerbated at the same time, with increased surveillance justified in the name of the public good.

How exactly can we understand vulnerability in terms of digital rights? In many senses, this vulnerability is universal. Ever-encroaching threats to our privacy, state surveillance, the mining of data on our personal lives for profit, are all universal threats facing individuals in a digital age.

Yet just as we have seen the myth of universal vulnerability in the face of the Coronavirus debunked, we are also learning that we are not equally vulnerable to threats to privacy, to censorship and to surveillance. State and private actors abuse their power in ways that exacerbate injustice and threaten democracy and the rule of law. The way technologies are deployed often amplifies inequalities, especially when location and/or biometric data are used. Taking a leaf out of the book of anti-racism movements: people are not simply 'vulnerable' to discrimination, exploitation and other harms – these harms are imposed on them. Rather than vulnerable, some groups are marginalised, through active processes in which people, institutions and structures of power are the cause.

Going forward, an awareness of how marginalised groups enjoy their digital rights is crucial to a better defence and protection for all. From the black, brown and Roma communities likely to be impacted by data-driven profiling, predictive policing and biometric surveillance; to the mother who only sees online job advertisements that fit her low-income profile; the child whose online learning experience should not be tainted by harmful content; the undocumented person who avoids health services for fear that data-sharing will lead to deportation; the queer and trans people who rely on anonymity for a safe experience online; the black woman whose account is suspended for using anti-racist terminology; and the protester worried about protecting their identity – infringements of 'digital rights' manifest differently for different people. Often, the harm cannot be corrected with a GDPR fine alone. It cannot be resolved with better terms and conditions. This is not just a matter of data protection, but of broader violations of human rights in a digital context.

These wider questions of harms and infringements in the digital age will challenge our existing frameworks. Is there a universal ‘subject’ for digital rights? Who are we referring to most often under the term ‘user’? Does this fully recognise the varying degrees of harm we are exposed to? Will the concept of rights holders as ‘users’ help or hinder this nuanced approach? Beyond ‘rights’, how do ideas of equality and justice inform our work?

EDRi members such as Privacy International have denounced data exploitation and the ways marginalised groups are disproportionately affected by digital rights violations. Panoptykon has explored how algorithmic profiling systems impact the unemployed in Poland, and integrates the risks of discrimination into its analysis of why the online advertising system is broken. At Privacy Camp, EDRi members have been reflecting on how children's rights and online hate speech impact our work as a digital rights network. Building on this work, EDRi is mapping the organisations, projects and initiatives in the European digital rights field that include a discrimination angle, or that explore how people in different life situations experience digital rights. Once we have a picture of what work is ongoing in the field and where the main gaps are, we will explore how EDRi can move forward, potentially through further research, campaigns, or efforts to connect digital and non-digital organisations.

We hope that this project will help us to meet our commitment to uphold digital rights for all, and to challenge power imbalances. We are learning that a truly universal approach recognises marginalisation in order to contest it. To protect digital rights for all, we must understand these differences, highlight them, and then fight for collective solutions.

Read more:

Who They Target – Privacy International
https://privacyinternational.org/learn/who-they-target

Profiling the unemployed in Poland: social and political implications of algorithmic decision-making (2015)
https://panoptykon.org/sites/default/files/leadimage-biblioteka/panoptykon_profiling_report_final.pdf

The digital rights of LGBTQ+ people: When technology reinforces societal oppressions (17.07.2019)
https://edri.org/the-digital-rights-lgbtq-technology-reinforces-societal-oppressions/

10 Reasons Why Online Advertising is Broken (09.01.2020)
https://en.panoptykon.org/online-advertising-is-broken

More than the sum of our parts: a strategy for the EDRi Network (27.05.2020)
https://edri.org/more-than-the-sum-of-our-parts-a-strategy-for-the-edri-network/

COVID-Tech: Surveillance is a pre-existing condition (27.05.2020)
https://edri.org/surveillance-is-a-pre-existing-condition/

08 Jul 2020

Europol: Non-accountable cooperation with IT companies could go further

By Chloé Berthélémy

An ongoing mantra among law enforcement authorities in Europe holds that private companies are indispensable partners in the fight against "cyber-enabled" crimes, as they are often in possession of personal data relevant to law enforcement operations. For that reason, police authorities increasingly attempt to lay hands on data held by companies – sometimes in disregard of the safeguards imposed by long-standing judicial cooperation mechanisms. Several initiatives at European Union (EU) level, like the proposed Regulation on European Production and Preservation Orders for electronic evidence in criminal matters (the so-called "e-evidence" Regulation), seek to "facilitate" national law enforcement authorities' access to personal data. Now it's Europol's turn.

The Europol Regulation entered into force in 2017, authorising the European Union Agency for Law Enforcement Cooperation (Europol) to "receive" (but not directly request) personal data directly from private parties like Facebook and Twitter. The goal was to enable Europol to gather personal data, feed it into its databases and support Member States in their criminal investigations. The Commission was supposed to specifically evaluate this practice of receiving and transferring personal data with private companies after two years of implementation (in May 2019). However, there is no public information on whether the Commission actually conducted such an evaluation, what its modalities were, or what its results showed.

Despite the absence of this assessment's results and of a fully-fledged evaluation of Europol's mandate, the Commission and the Council consider the current legal framework too limiting and have therefore decided to revise it. The legislative proposal for a new Europol Regulation is planned for release at the end of this year.

One of the main policy options foreseen is to lift the ban on Europol proactively requesting data from private companies or querying databases managed by private parties (e.g. WHOIS). However, disclosures by private actors would remain "voluntary". Just as the EU Internet Referral Unit operates without any procedural safeguards or strong judicial oversight, this extension of Europol's executive powers would hardly comply with the EU Charter of Fundamental Rights, which requires that restrictions of fundamental rights (here, the right to privacy) be necessary, proportionate and "provided for by law" (rather than resting on ad hoc "cooperation" arrangements).

This is why, in light of the Commission’s consultation call, EDRi shared the following remarks:

  • EDRi recommends first carrying out a full evaluation of the 2016 Europol Regulation before expanding the agency's powers, in order to base the revision of its mandate on proper evidence;
  • EDRi opposes the Commission's proposal to expand Europol's powers in the field of data exchange with private parties, as it goes beyond Europol's legal basis (Article 88(2) of the Treaty on the Functioning of the European Union);
  • The extension of Europol’s mandate to request personal data from private parties promotes the voluntary disclosure of personal data by online service providers which goes against the EU Charter of Fundamental Rights and national and European procedural safeguards;
  • The procedure by which Europol accesses EU databases should be reviewed and include the involvement of an independent judicial authority;
  • The Europol Regulation should grant the Joint Parliamentary Scrutiny Group real oversight powers.

Read our full contribution to the consultation here.

Read more:

Europol: Non-transparent cooperation with IT companies (18.05.2016)
https://edri.org/europol-non-transparent-cooperation-with-it-companies/

Europol: Delete criminals' data, but keep watch on the innocent (27.03.2018)
https://edri.org/europol-delete-criminals-data-but-keep-watch-on-the-innocent/

Oversight of the new Europol regulation likely to remain superficial (12.07.2016)
https://edri.org/oversight-new-europol-regulation-likely-remain-superficial/

(Contribution by Chloé Berthélémy, EDRi policy advisor)

08 Jul 2020

Web browser privacy: ARTICLE 19 welcomes initiatives to protect users

By Article 19

Widespread web tracking practices undermine users' human rights. However, safeguards against web tracking can be, and are being, deployed by various service providers. EDRi member ARTICLE 19, and EDRi as a whole, support these initiatives to protect user privacy and anonymity as part of a wider shift toward a more rights-respecting sector.

Web browsers are our guide across the internet. We use them to connect with others around the globe, orient ourselves, and find what we need or want online. The resulting trail of data about our preferences and actions has been exploited by the increasingly interdependent business models of the online advertising industry and web browsers. As advertising publishers, agencies, and service providers aim to maximise profit from advertisers by delivering increasingly personalised content to users, web browsers have strong incentives to collect as much data as possible about what each user searches, visits, and clicks on to feed into these targeted advertising models.

These practices not only threaten users' right to privacy, but can also undermine other fundamental rights, such as freedom of expression, access to information, and non-discrimination.

How we are tracked online

A number of mechanisms used by web browsers for ad targeting and tracking can also be used to cross-reference and track users, block access to websites, or discriminate among users based on profiles generated about them from their online activities and physical location. These mechanisms include:

  • Web usage mining, where the underlying data, such as pages visited and time spent on each page, is collected as clickstreams;
  • Fingerprinting, where information such as a user's OS version, browser version, language, time zone, and screen settings is collected to identify the device (see the sketch after this list);
  • Beacons, which are graphic images placed on a website or email to monitor the behaviour of the user and their remote device; and
  • Cookies, which are small files holding client and website data that can remain in browsers for long periods of time and are often used by third parties.
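
To make the fingerprinting mechanism concrete, here is a minimal sketch of the kind of signals a tracking script can read through standard browser APIs (navigator, screen and Intl are real web APIs; joining the values into a device identifier is our simplification of what commercial scripts do at much larger scale):

    // Minimal fingerprinting sketch (TypeScript): every value below is
    // exposed by standard web APIs, with no cookies and no user consent.
    function collectFingerprint(): Record<string, string | number> {
      return {
        userAgent: navigator.userAgent,       // browser and OS version
        language: navigator.language,         // interface language
        timezone: Intl.DateTimeFormat().resolvedOptions().timeZone,
        screen: `${screen.width}x${screen.height}x${screen.colorDepth}`,
        cores: navigator.hardwareConcurrency, // CPU core count
      };
    }

    // A tracker would hash the joined values into a stable device ID that
    // survives cookie deletion and private browsing sessions.
    const deviceId = JSON.stringify(collectFingerprint());

Each value on its own is innocuous; combined, they are often distinctive enough to single out one device among millions.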

Being subject to these practices should not be the non-negotiable price of using the internet. An increasing number of service providers are developing and implementing privacy-oriented approaches to serve as alternatives – or even the new default – in web browsing. These changes range from stronger, more ubiquitous encryption of data to the configuration and use of trusted servers for different tasks. These safeguards may be deployed as entirely new architectures and protocols by browsers and applications, and are being deployed at different layers of the internet architecture.

Encrypting the Domain Name System (DNS)

One advancement has been the development and deployment of internet protocols that support greater and stronger encryption of the data generated by users when they visit websites, redressing historical vulnerabilities in the Domain Name System (DNS). Encrypted Server Name Indication (eSNI) encrypts each domain’s identifiers when multiple domains are hosted by a single IP address, so that it is more difficult for Internet Service Providers (ISPs) and eavesdroppers to pinpoint which sites a user visits. DNS-over-HTTPS (DoH) sends encrypted DNS traffic over the Hypertext Transfer Protocol Secure (HTTPS) port and looks up encrypted queries made in the browser using the servers of a trusted DNS provider. These protocols make it difficult to detect, track, and block users’ DNS queries and therefore introduce needed privacy and security features to web browsing.
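
As an illustration of the difference DoH makes in practice, the sketch below resolves a name over HTTPS rather than plaintext UDP. It uses Cloudflare's public resolver and its application/dns-json interface purely as an example of a trusted DoH provider; any resolver offering the same interface would work:

    // DoH sketch (TypeScript): the query travels inside a TLS connection to
    // the resolver, so an on-path observer cannot read the queried name.
    async function dohLookup(name: string): Promise<string[]> {
      const url = `https://cloudflare-dns.com/dns-query?name=${encodeURIComponent(name)}&type=A`;
      const res = await fetch(url, { headers: { accept: "application/dns-json" } });
      const body = await res.json();
      // Answer records carry the resolved IP addresses.
      return (body.Answer ?? []).map((a: { data: string }) => a.data);
    }

    dohLookup("example.org").then((ips) => console.log(ips));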

Privacy-oriented web browsers

Another shift is in the architectures and advertising models of web browsers themselves. Increasingly popular privacy browsers such as Tor and Brave help protect user data and identity. Tor encrypts and anonymises users' traffic by routing it through the Tor network, while Brave anonymises user authentication using the Privacy Pass protocol, which allows users to prove that they are trusted without revealing identifying information to the browser. Brave's efforts to develop a privacy-centric model for web advertising – including a protocol that confirms when a user sees an ad without revealing who they are, and an anonymised, blockchain-based system to compensate publishers – have been closely followed by Apple and Google, which aim to standardise their own web architectures through Apple WebKit's ad click attribution technology and Google Chrome's Conversion Measurement API.

Although there are some differences, Brave’s, Apple’s, and Google’s advertising models all include mechanisms to limit the amount of data passed between parties and the amount of time this data is kept in their systems, disallow data such as cookies for reporting purposes, delay reports randomly to prevent identifiability through timestamp cross-referencing, and prevent arbitrary third parties from registering user data. As such, they not only protect users’ privacy and anonymity, but also prevent cross-site tracking and user profiling.
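
As a rough sketch of one of these mechanisms, the snippet below delays an attribution report by a random interval so that its arrival time can no longer be cross-referenced with an individual page visit. The reporting endpoint and payload are hypothetical, not any vendor's actual API:

    // Illustrative only: randomised delay for an ad-click attribution report.
    function scheduleAttributionReport(campaignId: number): void {
      const delayMs = Math.floor(Math.random() * 48 * 60 * 60 * 1000); // 0-48h
      setTimeout(() => {
        // The payload carries only the campaign ID: no cookie, no user ID
        // and no exact click timestamp survive into the report.
        fetch("https://reporting.example/attribution", {
          method: "POST",
          headers: { "content-type": "application/json" },
          body: JSON.stringify({ campaignId }),
        });
      }, delayMs);
    }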

Despite protocols such as eSNI and DoH and recent privacy advances in web browser advertising models and architectures, tracking of online activities continues to be the norm. For this reason, service providers that are working toward industry change are advocating for the widespread adoption of secure protocols and the standardisation of web browsing privacy models to redress existing vulnerabilities that have been exploited to monetise users’ data without their knowledge, monitor and profile them, and restrict the availability of content.

If privacy-oriented protocols and privacy-respecting web browsing models are standardised and widely adopted by the sector, respect for privacy will become an essential parameter for competition among not only web browsers, but also ISPs and DNS servers. This change can stimulate innovation and provide users with the choice between more and better services that guarantee their fundamental rights.

Challenges for privacy-enhancing initiatives

While these protocols and models have been welcomed by a number of stakeholders, they have also been challenged. Critics claim that these measures make it more difficult, if not impossible, to perform internet blocking and filtering. They claim that, as a result, privacy models undermine features such as parental controls and thwart the ability of ISPs and governments to identify malware traffic and malicious actors. These challenges rest on the assumption that there is a natural trade-off between the power of parties who retain control of the internet and the privacy of individual users.

In reality, however, technology advances as a whole: updated models lead to updated tools and mechanisms. Take DoH and its impact on parental controls as an example. DoH encrypts DNS queries, rendering obsolete most current DNS-filtering mechanisms used for parental controls; these mechanisms rely on DNS packet inspection, which cannot be performed on encrypted data without first intercepting and decrypting the stream. In response, both browsers and DNS servers are developing new technologies and services. Mozilla launched its "canary domain" mechanism, by which a network that filters DNS can answer a query for a designated canary domain with a negative response, signalling Firefox to disable DoH on that network. DoH-compatible DNS server providers like cleanbrowsing.org implement their own filtering policies at the resolver level. While these responses do not remove the need to safeguard users' privacy and access to information through strong legal and regulatory protections, accountability and transparency of service providers to users, and meaningful user choice, they demonstrate that the real benefits of browser privacy and security measures should not be thwarted on the basis of perceived threats to the status quo.
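
A simplified version of that canary-domain check might look as follows. The domain name use-application-dns.net is the one Mozilla actually designated; the surrounding logic is our sketch of the behaviour, run under Node.js rather than inside the browser:

    // Canary-domain sketch: if the local resolver refuses to answer for the
    // canary name, the network is signalling that DoH should be disabled,
    // e.g. because DNS-based parental controls are in place.
    import { promises as dns } from "node:dns";

    async function networkAllowsDoH(): Promise<boolean> {
      try {
        await dns.resolve("use-application-dns.net", "A");
        return true;    // canary resolves: no opt-out signal from the network
      } catch (err) {
        const code = (err as NodeJS.ErrnoException).code;
        if (code === "ENOTFOUND" || code === "ENODATA") {
          return false; // canary blocked: fall back to unencrypted system DNS
        }
        throw err;      // timeouts and other failures are inconclusive
      }
    }

    networkAllowsDoH().then((ok) => console.log(ok ? "enable DoH" : "disable DoH"));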

Leadership opportunity for the EU

In the European Union, the adoption of the General Data Protection Regulation (GDPR) has obliged all stakeholders in the debate to recognise and comply with data protection and privacy-by-design principles. Moreover, the Body of European Regulators for Electronic Communications (BEREC), whose main task is to contribute to the development and better functioning of the EU internal market for electronic communications networks and services, has identified user empowerment among its priorities. These dynamics create an opportunity for EU actors to take global leadership in efforts toward a privacy-oriented internet infrastructure.

Recommendations 

ARTICLE 19 strongly supports initiatives to advance browser privacy, including the implementation of protocols such as eSNI and DoH that facilitate stronger, more ubiquitous encryption of the Domain Name System and privacy-centric web advertising models for browsers. We believe these initiatives will lead to greater respect for privacy and human rights across the sector. In particular, we recommend that:

  • ISPs must help decentralise the encrypted DNS model by deploying their own DoH-compatible servers and encrypted services, taking advantage of the relatively low number of devices currently using DoH and the easy adoption curve this implies;
  • Browsers and DNS service providers should not override users’ configurations regarding when to enable or disable encryption services and which DNS service provider to use. Meaningful user choice should be facilitated by clear terms of service and accessible and clearly defined default, opt-in, and opt-out settings and options;
  • Browsers must additionally ensure that, even as they build privacy-friendly revenue generation schemes and move away from targeted ad models, all of these practices are transparent and clearly defined for users, both in the terms of service and codebase;
  • Internet standards bodies should encourage the inclusion of strong privacy and accountability considerations in the design of protocol specifications themselves, acknowledging the effects of these protocols in real-life testing and deployment; and
  • Civil society must promote the widespread adoption of secure tools, designs, and protocols through information dissemination, to help educate the community and empower users' choices.

Finally, ARTICLE 19 urges internet users to support the development and adoption of privacy-respecting tools that do not monetise their data, and to demand products from their service providers that better protect their privacy.

Read more:

Ethical Web Development booklet:
https://edri.org/files/ethical_web_dev_web.pdf

US companies to implement better privacy for website browsing (29.08.2018)
https://edri.org/us-companies-to-implement-better-privacy-for-website-browsing/

Internet protocol community has a new tool to respect human rights (15.11.2017)
https://edri.org/internet-protocol-community-has-a-new-tool-to-respect-human-rights

(Contribution from Maria Luisa Stasi, from EDRi member ARTICLE 19)

08 Jul 2020

Spain: Catalan government agrees to improve privacy in schools

By Xnet

The Catalan Department of Education has signed an agreement accepting the plan proposed by Xnet, EDRi member from Spain, titled "Privacy and Democratic Digitization of Educational Centers", to guarantee data privacy and the democratic digitisation of schools. The plan foresees the creation of a software pack and protocols that ensure educational establishments have alternatives to what has until now seemed the only option: technological dependence on Google and its attached services, with worrying consequences for personal data.

Things can be different. With this plan, Xnet seeks to create an organic system in educational institutions that guarantees the use of auditable and accessible technologies, and ensures that those technologies help preserve the rights of the educational community.

The key points of the project are:

  • Safe and human rights-compliant servers;
  • Auditable tools already in use, added in a stable pack;
  • Training that updates the culture in educational centres in favour of using digital technologies that respect human rights.

In Spain, as in many other places, COVID-19 has shown how far behind institutions are in digitisation, and how little will they have to understand it. Digital-related public policies often swing between carefree technosolutionism and technophobic neo-luddism. The result of these policies, in which the educational community is lectured about the dangers of technology while being forced to bend to the will of large digital corporations, is that dominant platforms already control the vast majority of educational establishments – and therefore the behaviour of students, their families and teachers.

To be a society fit for the digital age in which we live, it is not necessary to know about technology, nor to fear it more than any other tool. Digitisation should be undertaken in a way that is accessible and rational for everyone; a truly democratic digitisation that improves society. Books have served to build our societies, yet nobody expects those who want to use or teach them to know bookbinding. Perhaps this is where the initial problem arises: if other subjects are taught by experts in those subjects, why, in the field of digitisation, do we so often resort only to "technicians" and security officers warning of its dangers?

The notions of network and connectivity allow us to operate in an agile way, starting processes with few people but huge impact, even without advanced technological knowledge.

Read more:

(In Spanish) Propuesta para la excelencia en la privacidad de datos y la digitalización democrática de los centros educativos (03.06.2020)
https://xnet-x.net/privacidad-datos-digitalizacion-democratica-educacion-sin-google/

(In Spanish) El encuadernador y el exorcista: sobre el futuro de la digitalización en la educación (y en todo lo demás) (10.06.2020)
https://blogs.publico.es/dominiopublico/33360/el-encuadernador-y-el-exorcista-sobre-el-futuro-de-la-digitalizacion-en-la-educacion-y-en-todo-lo-demas/

(Contribution by Simona Levi, from EDRi member Xnet).

08 Jul 2020

Welcoming our new Senior Communications and Media Manager Gail Rego!

By EDRi

European Digital Rights is proud to announce that Gail Rego has joined the team at the Brussels office as the new Senior Communications and Media Manager. Gail is responsible for promoting the work of the EDRi network, improving communications, strengthening EDRi's public identity, and developing stronger relationships with the media and the press.

Gail has a decade of experience in communications and community-building roles in the UAE, Colombia, Malaysia, Kenya, and Belgium. She started working on tech and child rights related projects and campaigns as Head of Communications and Membership at Missing Children Europe. This included campaigns against child tracking apps; a multi-stakeholder project to improve missing children investigations using blockchain, geofencing and social media analysis; and the NotFound web app, which replaces website 404 pages with posters of missing children. Previously, she worked as the Communications and Partnerships Manager at the European Venture Philanthropy Association. She is a member of Young Feminist Europe and the People of Colour Brussels group.

Gail is a human rights activist passionate about dismantling systems of oppression that continue to silence and threaten women, people of colour, migrants, and other marginalised groups. She hopes to help bridge the gap between inclusive communities, intersectional identities and technologies such as AI, in order to confront the growing inequality, misinformation and polarisation of societies. Read her blogpost on how algorithmic bias prevents access to jobs here.


01 Jul 2020

EU must let its crown jewel shine: GDPR needs progress

By Diego Naranjo

On 24 June, the European Commission published its Communication reviewing the two years of application of the General Data Protection Regulation (GDPR). The Communication received input from the multistakeholder expert group on the application of the GDPR, to which EDRi members Access Now and Privacy International belong. EDRi welcomes the publication of the review at a time when data protection needs to be reinforced, not just celebrated.

The GDPR is considered one of the "crown jewels" of European legislation. However, two years after the Regulation entered into force, it has come under increasing criticism: from data protection activists, who cite the Regulation's lack of "teeth", and from Big Tech, which accuses the GDPR of stifling innovation.

What’s the GDPR Impact Assessment?

The Commission's review report echoes much of the analysis that civil society groups have put forward over the last two years, namely:

  • There are not enough joint operations or investigations in cross-border cases, which could have led to more harmonised enforcement.
  • Member states need to allocate “sufficient human, financial and technical resources to national data protection authorities”.
  • Despite the "harmonised" legislation, different implementations still exist in areas such as the age of children's consent for data processing, the balancing of freedom of expression and information with data protection rights, and derogations from the general prohibition on processing certain categories of personal data.
  • Individuals are not fully empowered yet, for example in the case of the right to data portability.
  • It is unclear how to adapt the GDPR to “new” technologies, such as contact tracing apps and facial recognition.

Alexa, tell me where to go from here

EDRi welcomes the Commission's call for stronger enforcement, asking DPAs and Member States to ensure harmonised enforcement, recognising the need for adequate funding for DPAs, and calling for specific guidelines where needed. If adequate progress is not made, we agree with the Commission that infringement procedures to ensure that Member States comply with the GDPR are an appropriate tool at its disposal.

The GDPR was the best outcome achievable in the political landscape of its time. Now it is time to ensure that all the work of activists, policy makers and academics was worth the effort. We must ensure that the GDPR's complementary legislation, the ePrivacy Regulation, is strengthened and adopted during the German Presidency of the Council of the EU.

Read more:

Communication from the European Commission on the review of the GDPR (24.06.2020)
https://ec.europa.eu/info/sites/info/files/1_en_act_part1_v6_1.pdf

Access Now welcomes European Commission’s call for stronger enforcement of the GDPR (24.06.2020)
https://www.accessnow.org/access-now-welcomes-european-commissions-call-for-stronger-enforcement-of-the-gdpr/

Privacy International: GDPR – 2 years on (22.05.2020)
https://privacyinternational.org/news-analysis/3842/gdpr-2-years

25 Jun 2020

Open Letter: EDRi calls on IBM to clarify stance on facial recognition

By EDRi

On 25 June, EDRi sent an open letter to the CEO of IBM in response to their 8 June statement on racial equality and facial recognition in the US.

EDRi asked IBM to provide more information about what will change as a result of their commitment to end general purpose facial recognition, and whether these issues will lead to changes in IBM’s contracts and work in the EU.

In May 2020, EDRi’s 44 civil society organisations launched the first European coalition to call on the EU for a “Ban on Biometric Mass Surveillance” including public facial recognition. We agree with IBM that biometric surveillance technologies can have seriously damaging impacts on our rights and societies and have no place in a democratic society.

Read the full letter here or find it below:

Dear Mr. Krishna,
Chief Executive Officer of IBM

We are European Digital Rights (EDRi), a coalition of 44 digital rights organisations across Europe, working to protect fundamental rights in the digital environment. We read your recent statement on facial recognition with great interest and hope, and were pleased to see Amazon and Microsoft follow suit.

We, too, have been advocating for protections against the harms caused by invasive, discriminatory facial recognition and other forms of biometric mass surveillance, and are heartened to see influential companies such as IBM stepping up to take action. Our own call to action has urged the EU to ban biometric mass surveillance, and our members are working at a national level to increase awareness and drive positive changes to protect people from the threats of surveillance.

We would greatly appreciate the opportunity for a dialogue between IBM and EDRi to better understand the specific actions that you will be taking to act upon your recent commitments. It would be very powerful if we could show IBM as an example for other companies.

We will make this letter and your response public, and therefore would like to ask for your written reply by 10 July. We would also like to suggest a call to discuss the details of our questions in the meantime.

In particular, we are seeking insight into the following:

  1. Which existing contracts will be stopped/cancelled as a result of IBM’s new position?
  2. Which applications specifically will IBM stop developing and selling in response to the new position? Are there other applications that IBM would consider within the remit of this position, but which have already been stopped? When and why were they stopped?
  3. What are the features of the applications that will be stopped?
  4. Does IBM have government contracts at the moment that fall into these categories in the United States and elsewhere? Which governments are IBM’s business partners for facial (or other biometric) recognition, analysis or processing software products?
  5. In the statement, IBM states that it opposes the use of technology for "mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency." Are these values and principles reinforced in IBM's contracts with clients/customers or in a human rights policy or statement? How is compliance with these values and Principles ensured?
  6. What are IBM’s structures, policies and processes to meet and demonstrate human rights compliance? Does IBM conduct human rights impact assessments or human rights due diligence on its products, in particular taking into account privacy concerns? Which stakeholders are included in IBM’s analyses?
  7. Was the recent statement developed in conjunction with human rights experts, and are any human rights experts supporting IBM with its implementation? Did IBM consult communities most impacted by use of its technology?
  8. In the statement, IBM speaks of “general purpose” technology. How do you define this, and does this mean that IBM anticipates that there will be exceptions? How are exceptions being justified, given the similarly violatory nature of both general purpose and specific purpose tools?
  9. Also linked to the “general purpose”, what specific purposes would IBM not support with your technology and by what criteria? What specific purposes would IBM therefore support?
  10. In the statement, IBM refers to “IBM facial recognition and software analysis”. Does IBM continue to (re)sell general purpose software from others?
  11. In the statement, IBM talks about "domestic law enforcement agencies". What about the military, border police, intelligence and security services?
  12. IBM places the statement in the context of federal policing, national policy and other US-specific areas. Is IBM taking action outside of the US context, recognising that such technologies are equally harmful in the EU and other regions?
  13. Will IBM apply the commitments in this statement to other areas of business or technologies such as smart city and smart policing projects?

We are looking forward to your response.

Sincerely,

Diego Naranjo, Head of Policy

24 Jun 2020

The threat to OTF as a wake-up call for European digital sovereignty

By Diego Naranjo

Around two billion people in 60 countries are able to use the internet securely, without the risk of being surveilled or censored, thanks to the work of a non-profit called the Open Technology Fund (OTF) – on a budget of only 15 million dollars a year. However, all of this may soon be over.

WTF is OTF?

OTF is an independent non-profit grantee of the United States Agency for Global Media (USAGM). OTF has supported crucial projects such as the security technology behind encryption in WhatsApp and Signal, the discovery of software vulnerabilities, and the creation of censorship circumvention technologies that enable us to communicate securely. These secure technologies, although important for everyone, are even more important for those at risk, such as human rights defenders, independent journalists, and individuals subject to censorship.

According to Save Internet Freedom Tech, there is a real risk, driven by corporate lobbying, that the new leadership at USAGM will "seek to dismantle OTF and re-allocate all of its US government funding to support a narrow set of anti-censorship tools without a transparent and open review process". An open letter calling to safeguard OTF's work is open for signatories.

Digital sovereignty: a critical resilience strategy in Europe

For most of the critical infrastructures and services we use every day, public funding is essential. As renowned economist Mariana Mazzucato explains, the Internet itself, GPS, the touchscreen display in your device, and voice-activated personal assistants like Siri are all results of public funding. The same goes for Google's search algorithm, which was funded by the National Science Foundation.

The European Union has taken some positive steps in this direction recently, especially with the FOSSA pilot project and the Next Generation Internet initiative. The threats to OTF, whether they materialise or not, should be a wake-up call for a European Commission that has set "digital sovereignty" as one of the key goals of the current term. If digital sovereignty means anything, it means building infrastructures, helping to create services, funding research and supporting the critical civil society that will make Europe resilient to the security risks of the increasingly interconnected, remote-working environment of a post-pandemic society. If OTF could do all of that with a humble budget of 15 million dollars, what could the EU do with a similar, or larger, budget? If digital sovereignty is to be a serious goal and not a buzzword, we need to direct resources to make it happen, sooner rather than later.

Read more:

Save Internet Freedom Tech
https://saveinternetfreedom.tech/

Taxpayers Helped Apple, but Apple Won’t Help Them (08.03.2013)
https://hbr.org/2013/03/taxpayers-helped-apple-but-app

Naomi Klein: How big tech plans to profit from the pandemic (13.05.2020)
https://www.theguardian.com/news/2020/may/13/naomi-klein-how-big-tech-plans-to-profit-from-coronavirus-pandemic

CEO of Open Technology Fund Resigns After Closed-Source Lobbying Effort (17.06.2020)
https://www.vice.com/en_us/article/935k5p/open-technology-fund-ceo-resigns

(Contribution by Diego Naranjo, EDRi Head of Policy)

24 Jun 2020

COVID-Tech: COVID-19 opens the way for the use of police drones in Greece

By Homo Digitalis

In EDRi’s series on COVID-19, COVIDTech, we explore the critical principles for protecting fundamental rights while curtailing the spread of the virus, as outlined in the EDRi network’s statement on the pandemic. Each post in this series tackles a specific issue at the intersection of digital rights and the global pandemic in order to explore broader questions about how to protect fundamental rights in a time of crisis. In our statement, we emphasised the principle that states must “defend freedom of expression and information”. In this fifth post of the series, we take a look at the issue of drone surveillance in Greece, and the legal provisions that has allowed it to emerge.

The COVID-19 pandemic has seen both conventional and unconventional technologies deployed by public authorities across the EU to combat its spread. Some of these technologies raise serious concerns for the privacy and data protection of individuals. The use of drones for surveillance purposes is one such technology.

In October 2019, Greek law-makers reformed the applicable rules on police drones via Presidential Decree 98/2019. The new legislation allows the Hellenic Police to make broad use of drones in policing and border management activities. Before the adoption of these new provisions, the Hellenic Police could not deploy drones for such activities; police drones were only allowed for purposes such as preventing forest fires or for search and rescue in the event of a natural disaster or in the aftermath of an accident.

A few months after the adoption of these new rules, in spring 2020, the Hellenic Police had already used them to their full extent in order to ensure compliance with the lockdown measures against COVID-19.

A brief assessment of the new legal rules on police drones

The Presidential Decree 98/2019 consists of only one (!) paragraph and provides that the police may use drones to provide air support for policing, surveillance and the transmission of information to ground police forces. This information may concern various police duties, such as:

  • "preventing and combating crime",
  • "tackling illegal migration in border regions", and
  • "controlling order and traffic".

These cases are described in the law rather vaguely, which, in addition to the broad scope of the duties themselves, leaves wide room for police interpretation of when drones may be employed and what information may be collected and shared. The Presidential Decree does not specify, for example, that drones may only be used to fight serious crime subject to prior judicial authorisation. Thus, the new rules allow an indiscriminate, blanket use of drones for any kind of policing and border management activity, opening the way for drone operations even for petty theft, without any prior authorisation.

Moreover, it is highly likely that images and video footage of identifiable individuals will be captured during drone operations. Given this indiscriminate permission to use drones, state surveillance in public spaces is likely to increase, seriously interfering with human rights such as privacy, data protection, freedom of expression and freedom of assembly. Such use could lead to a massive increase in the capabilities for omnipresent state surveillance, and catalyse human rights abuse.

Additionally, European and national data protection legislation applies when personal data are processed and form part of a filing system or are intended to form part of one. However, Presidential Decree 98/2019 does not provide any details on the data processing activities related to the use of drones. Nor does it provide any safeguards or specific control mechanisms against the abusive use of drones by the Hellenic Police (such as the retention period of the data collected, information to be made available to data subjects, records of processing activities, logging, or the designation of a data protection officer). Finally, Articles 27-28 of the Law Enforcement Directive and Articles 65 and 67 of Greek Law 4624/2019 require the Hellenic Police, prior to any processing activities that use new technologies, to consult the Hellenic DPA and carry out a data protection impact assessment. The Presidential Decree omits any reference to these obligations.

The use of drones during the COVID-19 lockdown measures

In April 2020, numerous news media reported that the Hellenic Police would deploy drones during the Easter holidays to ensure compliance with the lockdown measures against COVID-19, and the Hellenic Deputy Minister of Citizen Protection, Mr. Oikonomou, confirmed this aim. The drones were used in urban areas such as Athens and Thessaloniki to monitor the population's movements.

In April 2020, Homo Digitalis filed an official query with the Ministry of Citizen Protection requesting more information about this deployment, and notified the Hellenic DPA in this regard. The reply to this query is still pending. Moreover, Homo Digitalis published a related report analysing in depth all the aforementioned legal issues and highlighting the serious risks that arise from the deployment of drones by the Hellenic Police.

Homo Digitalis is keeping a close eye on related developments. For example, in June 2020 the Hellenic Police announced a public procurement contract of 136,000 euro for the acquisition of two drones in the context of the project HEFESTOS (Hellenic anti-Fraud Equipment and relevant trainings for Strengthening the Operability against Smuggling), while a few days ago the Western Greece Region concluded a contract with the Hellenic Police to acquire drones for policing activities within the framework of the INTERREG 2014-2020 programme. Finally, news media report that drones are soon to be deployed at the Evros border with Turkey as well.

Read more:

Ban Biometric Mass Surveillance! (13.05.2020)
https://edri.org/wp-content/uploads/2020/05/Paper-Ban-Biometric-Mass-Surveillance.pdf#page=14

(In Greek) Homo Digitalis, COVID-19 and Digital Rights Issues (22.04.2020)
https://www.homodigitalis.gr/wp-content/uploads/2020/04/HomoDigitalis_Report_COVID19_and_Digital_Rights_in_Greece_22.04.2020_Final.pdf

(In Greek) Official Query to the Ministry of Citizen Protection (30.04.2020)
https://www.homodigitalis.gr/posts/6579

(In Greek) Presidential Decree 98/2019 (25.10.2019)
https://www.kodiko.gr/nomologia/document_navigation/570607/p.d.-98-2019

Homo Digitalis
https://www.homodigitalis.gr/

(Contribution by Eleftherios Chelioudakis & Antigoni Logotheti, from EDRi member Homo Digitalis, Greece)

24 Jun 2020

French Avia law declared unconstitutional: what does this teach us at EU level?

By Chloé Berthélémy

On 18 June, the French Constitutional Council, France's highest constitutional authority, declared the main provisions of the "Avia law" unconstitutional. France's legislation on hate speech was adopted in May despite being severely criticised from nearly all sides: by the European Commission, the Czech Republic, digital rights organisations, and LGBTQI+, feminist and antiracist organisations. Opposed to the main measures throughout the legislative process, the French Senate brought the law before the Constitutional Council as soon as it was adopted.

The Court’s ruling represents a major victory for digital freedoms, not only for French people, but potentially for all Europeans. In past years, France has been championing its law enforcement model for the fight against (potentially) illegal online content at the European Union (EU) level, especially in the framework of the Terrorist Content Regulation, currently in hard-nosed negotiations. The setback received after the Constitutional Court’s decision will likely re-shuffle the cards in the current and future European content regulation-related files.

The Avia law is “not necessary, appropriate and proportionate”

In its decision, the Constitutional Council held that certain provisions infringe "on freedom of speech and communication, and are not necessary, appropriate and proportionate to the aim pursued". Looking at the details of the ruling, the Council quashed the following measures for taking down allegedly illegal content:

  • The "notice-and-action" system by which any user can flag "manifestly illegal" content (from a long pre-set list of offences) and the notified online service provider is required to remove it within 24 hours;
  • The reduction of the intermediary's deadline to remove illegal terrorist content and child sexual abuse material to one hour after receipt of a notification from an administrative authority;
  • All the best-efforts obligations linked to the unconstitutional removal measures above, such as transparency obligations (on access to redress mechanisms and on content moderation practices, including the number of removals and the rate of wrongful takedowns); and
  • The oversight mandate given to the Conseil supérieur de l'audiovisuel (the French High Audiovisual Council) to monitor the implementation of those best-efforts obligations.

Plot twist!

The Court’s decision will have a decisive impact on the European negotiations on the draft Regulation against the dissemination of terrorist content online. The European Commission hastily published the draft legislation under pressure from France and Germany in 2018 looking towards a quick adoption to serve the Commission’s electoral communication strategy. However, since the trilogues started, the European Parliament and the Council of Member States have been facing a persistent deadlock regarding the proposal’s main measures.

In this context, the Constitutional Council's ruling comes as a massive blow to the Commission's and France's advocacy. In particular, France has been pushing to expand the definition of what constitutes a "competent authority" (an institution with legal authority to make content determinations) under the Regulation to include administrative (that is, law enforcement) authorities. Consequently, law enforcement agents would be allowed to issue orders to remove or disable access to illegal terrorist content within an hour. The Council declared this type of measure a clear breach of the French Constitution, pointing out the lack of judicial involvement in determining whether a specific piece of content is illegal, and the incentives (in the form of strict deadlines and heavy sanctions) to overzealously block perfectly legal speech. It draws similar conclusions for the legal arrangements addressing potential hate speech.

In general, the Council underlines that only the removal of manifestly illegal content can be ordered without a judge's prior authorisation. However, assessing that a given piece of content is manifestly illegal requires a minimum of analysis, which is impossible within such a short time frame. Inevitably, this decision weakens the position of pro-censorship hardliners in European debates.

Ahead of the Digital Services Act, a legislative package which will update the EU rules governing online service providers’ responsibilities, the European legislators should pay particular attention to this ruling to guarantee the respect of fundamental rights. EDRi and its members will continue to monitor the development of these files and engage with the institutions in the upcoming period.

Read more:

(In French) La Quadrature Du Net, Loi haine: le Conseil constitutionnel refuse la censure sans juge (18.06.2020)
https://www.laquadrature.net/2020/06/18/loi-haine-le-conseil-constitutionnel-refuse-la-censure-sans-juge/

EFF, Victory! French High Court Rules That Most of Hate Speech Bill Would Undermine Free Expression (18.06.2020)
https://www.eff.org/press/releases/victory-french-high-court-rules-most-hate-speech-bill-would-undermine-free-expression

Constitutional Council declares French hate speech ‘Avia’ law unconstitutional (18.06.2020)
https://www.article19.org/resources/france-constitutional-council-declares-french-hate-speech-avialaw-unconstitutional/

France’s law on hate speech gets a thumbs down (04.12.2019)
https://edri.org/frances-law-on-hate-speech-gets-thumbs-down/

(Contribution by Chloé Berthélémy, EDRi Policy Advisor)
