The 8th annual Privacy Camp will take place in Brussels on 21 January 2020.
With a focus on “Technology and Activism”, Privacy Camp 2020 will explore the significant role digital technology plays in activism: enabling people to bypass traditional power structures and fostering new forms of civil disobedience, but also enhancing the surveillance powers of repressive regimes. Together with activists and scholars working at the intersection of technology and activism, the event will cover a broad range of topics, from surveillance and censorship to civic participation in policy-making and more.
The call for panels invites classical panel submissions as well as interactive formats such as workshops. We are particularly interested in providing space for discussions on and around social media and political dissent, hacktivism and civil disobedience, the critical public sphere, data justice and data activism, as well as commons, peer production, platform cooperativism, and citizen science. The deadline for proposing a panel or a workshop is 10 November 2019.
In addition to traditional panel and workshop sessions, this year’s Privacy Camp invites critical makers to join the debate on technology and activism. We are hosting a Critical Makers Faire for counterculture and DIY artists and makers involved in activism. The Faire will provide a space to feature projects such as biohacking, wearables, bots, glitch art, and much more. The deadline for submissions to the Makers Faire is 30 November.
Privacy Camp is an annual event that brings together digital rights advocates, NGOs, activists, academics and policy-makers from Europe and beyond to discuss the most pressing issues facing human rights online. It is jointly organised by European Digital Rights (EDRi), Research Group on Law, Science, Technology & Society at Vrije Universiteit Brussel (LSTS-VUB), the Institute for European Studies at Université Saint-Louis – Bruxelles (USL-B), and Privacy Salon.
Privacy Camp 2020 takes place on 21 January 2020 in Brussels, Belgium. Participation is free and registrations open in December.
The Body of European Regulators for Electronic Communications (BEREC) is currently in the process of overhauling its guidelines on the implementation of Regulation (EU) 2015/2120, which forms the legal basis of the EU’s net neutrality rules. At its most recent plenary, BEREC produced new draft guidelines and opened a public consultation on the draft. The proposed changes to the guidelines are a mixed bag.
5G network slicing
The new mobile network standard 5G allows network operators to provide multiple virtual networks (“slices”) with different quality characteristics over the same physical infrastructure, a capability called “network slicing”. Because end-user equipment can be connected to multiple slices at the same time, providers could use the introduction of 5G to create new products in which different applications make use of different slices with their associated quality levels. In its draft guidelines, BEREC clarifies that it is the user who must be able to choose which application makes use of which slice. This is a welcome addition.
Zero-rating

Zero-rating is the practice of billing the traffic of different applications differently, and in particular of not deducting the traffic generated by certain applications from a user’s available data volume. This practice has been criticised because it reduces consumers’ choice regarding which applications they can use, and disadvantages new, small application providers against the big, already established players. These offers broadly come in two types: “open” zero-rating offers, where application providers can apply to become part of the programme and have their application zero-rated, and “closed” offers, where that is not the case. The draft outlines specific criteria according to which open offers can be assessed.
Parental control filters
While content- and application-specific pricing is an additional challenge for small content and application providers, content-specific blocking can create even greater problems. Nevertheless, the draft contains new language that carves products such as parental control filters operated by the access provider out of the provisions of the Regulation that prohibit such blocking, instead subjecting them to a case-by-case assessment by the regulators (as is the case for zero-rating). The language does not clearly exclude filters that are sold in conjunction with the access product and are switched on by default, and the rules can even be read as requiring users who do not want to be subjected to the filtering to manually reconfigure each of their devices.
Deep Packet Inspection
Additionally, BEREC is running a consultation on two paragraphs in the guidelines to which it has not yet proposed any changes. These paragraphs establish important privacy protections for end-users: they prohibit access providers from using Deep Packet Inspection (DPI) when applying traffic management measures in their networks, and thus protect users from having the content of their communications inspected. However, according to statements made during the debriefing session of the latest BEREC plenary, some actors want to allow providers to look at domain names, which can reveal very sensitive information about the user and which require DPI to extract from the data stream.
EDRi member epicenter.works will respond to BEREC’s consultation and encourages other stakeholders to participate. The proposed changes are significant; clearer language is required, and users’ privacy needs to remain protected. The consultation period ends on 28 November 2019.
The police’s tragic failure to save a teenage girl, who was abducted and managed to call the 112 emergency number three times before she was murdered, led to the adoption of a new Emergency Ordinance in Romania. The law introduces several measures to improve the 112 system, one of which is mandatory SIM card registration for all prepaid users. Approximately ten million prepaid SIM cards are currently in use in Romania.
This is the sixth attempt in the last eight years to pass legislation requiring the registration of SIM card users, despite a 2014 Constitutional Court decision deeming such a measure unconstitutional. The measure was adopted through a fast-track legislative procedure and is supposed to enter into effect on 1 January 2020.
The main reason for introducing mandatory SIM card registration appears to be that the authorities want to localise calls to the emergency number and punish false emergency calls. However, the measure is unlikely to serve this purpose, as anyone who buys a SIM card can obviously give it to someone else. Another stated reason is to identify callers in real emergency situations, in order to locate them more easily and send help.
Romania is one of the few countries in the European Union where calling the emergency number without a SIM card is not possible. This has been a deliberate decision taken by Romanian authorities to limit the number of “non-urgent” calls.
After the Emergency Ordinance was proposed, EDRi member ApTI, together with two other Romanian NGOs, launched a petition to the Ombudsman and the government calling for this law not to be adopted. After civil society’s calls for a public debate, the Ministry of Communications organised an oral hearing in which the participants were given no more than five minutes to express their views, without the possibility to have an actual dialogue. The Emergency Ordinance was adopted shortly after the hearing, despite the fact that the Romanian Constitution explicitly states that laws which affect fundamental rights cannot be adopted by emergency ordinances (Article 115 of the Romanian Constitution).
What did the court say in 2014?
In 2014, the Constitutional Court held that the “retention and storage of data is an obvious limitation of the right to personal data protection and to the fundamental rights protected by the Constitution on personal and family privacy, secrecy of correspondence and freedom of speech” (para. 43 of Decision nr. 461/2014, unofficial translation). The Court explained that restricting fundamental rights is possible only if the measure is necessary in a democratic society. The measure must also be proportionate, and must be applicable without discrimination and without affecting the essence of the right or liberty.
Collecting and storing the personal data of all citizens who buy prepaid SIM cards for the mere reason of punishing those who might abusively call the emergency number seems like a blatantly disproportionate measure that unjustifiably limits the right to private life. At the same time, such a measure reverses the presumption of innocence and automatically assumes that all prepaid SIM card users are potentially guilty.
What’s the current status?
The Ombudsman listened to civil society’s concerns, and challenged the Ordinance at the Constitutional Court. Together with human rights NGO APADOR-CH, ApTI is preparing an amicus curiae to support the unconstitutionality claims.
In the meantime, the Ordinance moved on to parliamentary approval and the provisions related to mandatory SIM card registration were rejected in the Senate, the first chamber to debate the law. The Chamber of Deputies can still introduce modifications.
EU Commissioners are a powerful bunch: as the executive branch of the European Union – complementing the European Parliament and the Council of the European Union as legislators, and the Court of Justice of the European Union (CJEU) as judiciary – the College’s wide-ranging responsibilities cover EU policy, law, budget, and “political and strategic direction”. With digitalisation an issue that transcends borders, the choice of Commissioners could have an impact on digital rights across the world.
Between 30 September and 8 October 2019, the Commissioners-designate underwent marathon confirmation hearings in the European Parliament. These hearings give the EU’s elected representatives (Members of the European Parliament, MEPs) an opportunity, before voting on the Commissioners-designate, to ask them questions about their capacities and potential priorities if elected. Among the three nominees that did not make the cut was France’s nominee for the Internal Market, Sylvie Goulard, whose late-stage rejection may delay the start of the new Commission.
A shared task to update Europe for the digital age
Five of the incoming Commissioners’ portfolios are predicted to have a significant influence on digital policy. Carved up by President von der Leyen, their overlapping responsibilities to make Europe fit for the digital age could make or break citizens’ rights to privacy, data protection and online freedoms in general:
Sweden’s Ylva Johansson, Commissioner-designate for Home Affairs, will inherit a portfolio including cybercrime, the terrorist content Regulation, and issues relating to privacy and surveillance. Whilst her hearing was relatively light on digital questions, it was certainly heavy on evasive answers. Her insistence on fundamental rights was a good start, but her call for compromise between security and privacy fell into the age-old myth of the two rights as mutually exclusive.
Belgium’s Didier Reynders, Commissioner-designate for Justice and Consumers, championed rights by committing to enforce the General Data Protection Regulation (GDPR) to its fullest extent. On Artificial Intelligence (AI) and data protection, he promised swift law, safety, trust, transparency, and for those making or judging the law to better understand the impacts of algorithmic decisions. He cited plans for a collective redress position in November.
The Czech Republic’s Věra Jourová (current Commissioner for Justice) made her case as Commissioner-designate for Values and transparency. Democracy, freedom of expression and cracking down on disinformation were key topics. Despite an understated performance, she called Europeans “the safest people on the planet.” She is right that GDPR sets a strong global standard, but it has faced a rocky implementation, and as of today requires further efforts to ensure the harmonisation that the Regulation prescribed.
Last was Denmark’s Margrethe Vestager for Executive Vice-President for a Europe fit for the digital age, and continuing as Competition Commissioner. Her anti-Big-Tech, “privacy-friendly”, pro-equality, redistribution agenda was well received. She faced questions about breaking up Big Tech, leaving it on the table as a “tool” of last resort but emphasising her desire to exhaust other avenues first. But she stumbled when it came to accusations that her aspirations to rein in Big Tech are incompatible with her remit as leader of the EU’s digital affairs.
The implications for digital policy
Throughout the hearings, the Commissioners-designate made many commitments, emphasised their policy priorities, and shared their plans for the future. Although we do not know exactly how this will translate into concrete policy, their hearings give valuable insight into how the new College intends to tackle rights challenges in the online environment. This is not an exact science, but we invite you to join us – and our “rightsometer” – to speculate about what impact the nominees’ ideas will have on citizens’ digital rights over the next five years, based on what the nominees did (and did not) say.
Privacy
Key legislation: ePrivacy
The currently stalled ePrivacy Regulation was unsurprisingly raised by MEPs – and, reassuringly, Vestager shared that “passing ePrivacy” needs to be “a high priority”.
Result: with Vestager’s support, it is a cautiously optimistic 3/5 on the rightsometer – but the troubled history of the Regulation also warns us not to be too hopeful.
Platform power
Key legislation: E-Commerce Directive (ECD), slated to be replaced by the upcoming Digital Services Act (DSA)
Vestager was the champion of regulating Big Tech throughout her hearing, proposing to redress the balance of power in favour of citizens, and giving consumers more choice about platforms. But she later confessed to uncertainty around the shape that the DSA will take, saying that she needs to “take stock” before committing to a position on E-Commerce. Jourová committed to redress in the event of wrongful takedown of content, and emphasised her strong support for the DSA. However, she suggested her intention to explore platform “responsibility” for illegal content, a move which would threaten myriad human rights.
Result: the rightsometer gives an inconclusive 2.5/5, with commitments to strengthening Big Tech regulation promising, but risks of unintended consequences of some of their ideas remaining a big concern.
Disinformation
Key document: Code of Practice on Disinformation
Jourová committed to tackling the problem of online disinformation, promising to bring in codes of conduct for platforms, to make it clear where political advertisements come from and by whom they are funded, and to enforce “rules” for political campaigning.
Result: it’s a positive 4/5, and we encourage Jourová to analyse the risks posed by targeted political advertising and the online tracking industry’s dysfunctional business models. However, a cautious approach is needed (see the Access Now, EDRi and Liberties Guide on Disinformation).
Law enforcement and cross-border access to data
Key legislation: “e-Evidence” proposal
Under direct questioning from MEP Moritz Körner about plans to advance e-Evidence, Commissioner-designate Johansson declined to provide a reply. She also insinuated that fundamental rights to encryption might be incompatible with fighting terrorism.
Result: e-Evidence makes for a pessimistic 0/5 on the rightsometer, with nothing to give confidence that this controversial proposal is being reassessed.
Artificial Intelligence (AI)
Key legislation: none proposed yet – but both von der Leyen and Reynders promised “horizontal” legislation in 100 days
Jourová emphasised that fundamental rights in AI innovation will “ensure that our solutions put people first, and will be more sustainable as a result”. Vestager added that ethics will be at the heart of AI policy, and Reynders that Europe’s “added value” is in bringing protection for privacy and data to future AI legislation.
Result: a promising 4/5 on the rightsometer; we welcome the Commissioners-designate’s focus on fundamental rights when implementing AI-based technologies.
Where does that leave our digital rights?
Disinformation, Artificial Intelligence, privacy, and mitigating platform power all received substantive commitments from the Commissioners-designate. Protecting fundamental rights online was, thankfully, a persistent concern for all the nominees. Certain topics, such as “digital literacy”, were mentioned but not given any flesh, and nominees also declined to answer a number of “too specific” questions. Although there is a lot to be optimistic about, the balance between rights and law enforcement or innovation means that we should stay cautious.
On 11 October 2019, EDRi, together with four other civil society organisations, sent an open letter to EU Member States, urging them to conclude the negotiations on the ePrivacy Regulation. The letter highlights the urgent need for a strong ePrivacy Regulation to tackle the problems created by commercial surveillance business models, and expresses deep concern that the Member States, represented in the Council of the European Union, still have not made decisive progress, more than two and a half years after the Commission presented the proposal.
We, the undersigned organisations, urge you to swiftly reach an agreement in the Council of the European Union on the draft ePrivacy Regulation.
We are deeply concerned by the fact that, more than two and a half years since the Commission presented the proposal, the Council still has not made decisive progress. Meanwhile, one after another, privacy scandals are hitting the front pages, from issues around the exploitation of data in the political context, such as “Cambridge Analytica”, to the sharing of sensitive health data. In 2019, for example, an EDRi/CookieBot report demonstrated how EU governments unknowingly allow the ad tech industry to monitor citizens across public sector websites. An investigation by Privacy International revealed how popular websites about depression in France, Germany and the UK share user data with advertisers, data brokers and large tech companies, while some depression test websites leak answers and test results to third parties.
A strong ePrivacy Regulation is necessary to tackle the problems created by the commercial surveillance business models. Those business models, which are built on tracking and cashing in on people’s most intimate moments, have taken over the internet and create incentives to promote disinformation, manipulation and illegal content.
What Europe gains with a strong ePrivacy Regulation
The reform of the current ePrivacy Directive is essential to strengthen – not weaken – individuals’ fundamental rights to privacy and confidentiality of communications. It is necessary to make current rules fit for the digital age. In addition, a strong and clear ePrivacy Regulation would push Europe’s global leadership in the creation of a healthy digital environment, providing strong protections for citizens, their fundamental rights and our societal values. All this is key for the EU to regain its digital sovereignty, one of the goals set out by Commission President-elect Ursula von der Leyen in her political guidelines.
Far from being an obstacle to the development of new technologies and services, the ePrivacy Regulation is necessary to ensure a level playing field and legal certainty for market operators. It is an opportunity for businesses to innovate and invest in new, privacy-friendly business models.
What Europe loses without a strong ePrivacy Regulation
Without the ePrivacy Regulation, Europe will continue living with an outdated Directive which is not being properly enforced, and the completion of the legal framework initiated with the General Data Protection Regulation (GDPR) will not be achieved. Without a strong Regulation, surveillance-driven business models will be able to cement their dominant positions and continue posing serious risks to our democratic processes. The EU also risks losing the position as global standard-setter and digital champion that it earned through the adoption of the GDPR.
As a result, people’s trust in internet services will continue to fall. According to the Special Eurobarometer Survey of June 2019, the majority of users believe that they have only partial control over the information they provide online, with 62% of them concerned about it.
The ePrivacy Regulation is urgently needed
We expect the EU to protect people’s fundamental rights and interests against practices that undermine the security and confidentiality of their online communications and intrude in their private lives.
As you meet today to discuss the next steps of the reform, we urge you to finally reach an agreement to conclude the negotiations and deliver an upgraded and improved ePrivacy Regulation for individuals and businesses. We stand ready to support your work.
AccessNow
The European Consumer Organisation (BEUC)
European Digital Rights (EDRi)
Privacy International
Open Society European Policy Institute (OSEPI)
When the European Commission proposed to replace the outdated and improperly enforced 2002 ePrivacy Directive with a new ePrivacy Regulation in January 2017, it marked a cautiously hopeful moment for digital rights advocates across Europe. With the backdrop of the General Data Protection Regulation (GDPR), which became applicable in May 2018, Europe took a giant leap forward in the protection of personal data. Yet by failing to adopt the only piece of legislation protecting the right to privacy and to the confidentiality of communications, the Council of the European Union seems to have prioritised private interests over the fundamental rights, security and freedoms of citizens that a strong ePrivacy Regulation would protect.
This is not an abstract problem: commercial surveillance models – where businesses exploit user data as a key part of their business activity – pose a serious threat to our freedom to express ourselves without fear. This model relies on profiling, essentially putting people into the boxes in which the platforms believe they belong – a very slippery slope towards discrimination. And as children make up an increasingly large proportion of internet users, the risks become even starker: their online actions could affect their access to opportunities in the future. Furthermore, these models are set up to profit from the mass sharing of content, and so platforms are perversely incentivised to promote sensationalist posts that could harm democracy (for example, political disinformation).
It is high time the Council of the European Union took note of the risks to citizens caused by the current black hole where ePrivacy legislation should be. Amongst the doom and gloom, there are reasons to be optimistic. If delivered in its strongest form, an improved ePrivacy Regulation will complement the GDPR; ensure compliance with essential principles such as privacy by design and by default; tackle the pervasive model of online tracking and the disinformation it creates; and give power back to citizens over their private lives. We urge the Council to swiftly update and adopt a strong, citizen-centred ePrivacy Regulation.
Representatives of the UK Home Department, US Attorney General, US Homeland Security and Australian Home Affairs have joined forces to issue an open letter to Mark Zuckerberg. In their letter of 4 October, they urge Facebook to halt plans for end-to-end (aka strong) encryption across Facebook’s messaging platforms, unless such plans include “a means for lawful access to the content of communications”. In other words, the signatories are requesting what security experts call a “backdoor” for law enforcement to circumvent legitimate encryption methods in order to access private communications.
The myth of weak encryption as safe
Whilst the US, UK and Australia are adamant that their position enhances the safety of citizens, there are many reasons to be sceptical. The open letter uses emotive language to emphasise the risks of “child sexual exploitation, terrorism and extortion” that the signatories claim are associated with strong encryption, but fails to give a balanced assessment that includes the risks weak encryption poses to privacy, democracy and most business transactions. By positioning weak encryption as a “safety” measure, the US, UK and Australia imply (or even explicitly state) that supporters of strong encryption are supporting crime.
Government-led attacks on everybody’s digital safety aren’t new. Since the 1990s, the US has tried to prevent the export of strong encryption and—when that failed—worked on forcing software companies to build backdoors for the government. Those attempts were called the first “Cryptowars”.
In reality, however, arguing that encryption mostly helps criminals is like saying that vehicles should be banned and all knives blunt because both have been used by criminals and terrorists. Such reasoning ignores that in the huge majority of cases strong encryption greatly enhances people’s safety. From enabling secure online banking, to keeping citizens’ messages private, internet users and companies rely on strong encryption every single day. It is the foundation of trusted, secure digital infrastructure. Weak encryption, on the other hand, is like locking the front door of your home, only to leave the back one open. Police may be able to enter more easily – but so too can criminals.
Strong encryption is vital for protecting civil rights
However, it is worth remembering that Facebook’s announcement that it will encrypt some user content is so far just that: an announcement. The advertising company’s approach to privacy is a prime example of surveillance capitalism: protecting some users when it is favourable for its PR, and exploiting user data when there is a financial incentive to do so. To best protect citizens’ rights, we need a concerted effort between policy-makers and civil society to enact laws and build better technology so that neither our governments nor social media platforms can exploit us and our personal data.
The bottom line
Facebook must refuse to build anything that could constitute a backdoor into their messaging platforms. Otherwise, Facebook is handing the US, UK and Australian governments a surveillance-shaped skeleton key that puts Facebook users at risk worldwide. And once that door is unlocked, there will be no way to control who will enter.
The EU Directive imposing the collection of air passengers’ information (Passenger Name Record, PNR) was adopted in April 2016, the same day as the General Data Protection Regulation (GDPR). The collection of PNR data from all flights entering and leaving the EU has a strong impact on individuals’ right to privacy, and needs to be justified on the basis of necessity and proportionality, and only if it meets objectives of general interest. All of this is lacking in the current EU PNR Directive, which is at the moment being implemented across the EU.
The Austrian implementation of the PNR Directive
In Austria, the Passenger Information Unit (PIU) has been processing PNR data since March 2019. On 9 July 2019, the central Passenger Data office (Fluggastdatenzentralstelle) issued a response to inquiries into the PNR implementation in Austria. According to the document, 7 633 867 records were transmitted to the PIU between February 2019 and 14 May 2019. On average, about 490 hits per day are reported, with about 3 430 hits per week requiring further verification. Out of the 7 633 867 reported records, there were 51 confirmed matches, and in 30 cases staff intervened at the airport concerned.
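To put these figures in perspective, a quick back-of-the-envelope calculation, using only the numbers quoted above, shows how rarely a transmitted record leads to a confirmed match:

```python
# Figures cited in the Fluggastdatenzentralstelle response:
# 7,633,867 transmitted records, 51 confirmed matches.
records = 7_633_867
confirmed = 51

rate = confirmed / records
print(f"Confirmed matches per record: {rate:.7f} ({rate * 100:.5f}%)")
# Well under a thousandth of a percent of transmitted records
# resulted in a confirmed match.
```

In other words, for every confirmed match, roughly 150 000 passenger records were collected and processed.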
Impact on innocents
What this small show of success does not capture, however, is the damage inflicted on the thousands of innocent passengers who are wrongly flagged by the system and who can be subjected to damaging police investigations or denied entry into destination countries without proper cause. Mass surveillance that searches for a small, select group within the general population is invasive, inefficient, and counter to fundamental rights. It subjects the majority of people to extreme security measures that are not only ineffective at catching terrorists and criminals, but that undermine privacy rights and can cause immense personal damage.
Why is this happening? The base rate fallacy
Imagine a city of 1 000 000 inhabitants that implements surveillance measures to catch terrorists. This particular surveillance system has a failure rate of 1%, meaning that (1) when it scans a terrorist, it registers a hit 99% of the time and fails to do so 1% of the time, and (2) when it scans a non-terrorist, it correctly ignores them 99% of the time, but registers a hit 1% of the time. What is the probability that a person flagged by this system is actually a terrorist?
At first glance, it might look like there is a 99% chance of that person being a terrorist. Given the system’s failure rate of 1%, this prediction seems to make sense. However, this is an example of incorrect intuitive reasoning, because it fails to take into account the base rate: the proportion of terrorists in the overall population.
This is the base rate fallacy: the tendency to ignore base rates – actual probabilities – in the presence of specific, individuating information. Rather than integrating general information and statistics with information about an individual case, the mind tends to ignore the former and focus on the latter. One manifestation of the base rate fallacy is the false positive paradox, in which false positive results are more probable than true positive results. This occurs when the overall incidence of a condition is low – lower than the false positive rate. Deconstructing the false positive paradox shows that the true chance of a flagged person being a terrorist is closer to 1% than to 99%.
In our example, out of one million inhabitants, there would be 999 900 law-abiding citizens and 100 terrorists. The number of true positives registered by the city’s surveillance system is 99, while the number of false positives is 9 999 – a number that would overwhelm even the best system. In all, 10 098 people – 9 999 non-terrorists and 99 actual terrorists – will trigger the system. This means that, due to the high number of false positives, the probability that a flagged person is actually a terrorist is not 99% but below 1%. Searching large data sets for a few suspects means that only a small fraction of hits will ever be genuine. This is a persistent mathematical problem that cannot be avoided, even with improved accuracy.
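The arithmetic above can be checked in a few lines of Python; the figures are exactly those of the example, with no new assumptions:

```python
# Worked numbers for the base-rate example: a city of 1,000,000 people,
# 100 of whom are terrorists, and a screening system with a 1% error
# rate (99% sensitivity, 1% false positive rate).
population = 1_000_000
terrorists = 100
sensitivity = 0.99          # P(hit | terrorist)
false_positive_rate = 0.01  # P(hit | non-terrorist)

non_terrorists = population - terrorists                # 999,900
true_positives = sensitivity * terrorists               # 99
false_positives = false_positive_rate * non_terrorists  # 9,999
total_flagged = true_positives + false_positives        # 10,098

# Bayes' theorem: P(terrorist | flagged)
p_terrorist_given_hit = true_positives / total_flagged
print(f"People flagged: {total_flagged:.0f}")
print(f"P(terrorist | flagged) = {p_terrorist_given_hit:.4f}")
# The probability comes out just under 1%, not 99%.
```

Note that even a tenfold improvement in accuracy (a 0.1% false positive rate) would still produce roughly ten false positives for every true one, which is why better technology alone cannot rescue this kind of dragnet.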
Security and privacy are not incompatible – rather there is a necessary balance that must be determined by a society. The PNR system, by relying on faulty mathematical assumptions, ensures that neither security nor privacy are protected.
Today’s children have the most complex digital footprint in human history, with their data being collected by private companies and governments alike.
The consequences for a child’s future revolve around the freedom to learn from one’s mistakes, the reputational damage caused by past mistakes, and the traumatic effects of discriminatory algorithms.
Summer is the time of year when parents get to spend more time with their children. Often enough, this also means children get to spend more time with electronic devices, their own or their parents’. Taking a selfie with the little one, or keeping them busy with a Facebook game or a YouTube animations playlist – these are the kinds of activities that make the digital footprint of today’s child the largest in human history.
Who wants your child’s data?
Mobile phones, tablets and other electronic devices can open the door to the exploitation of data about the person using them – how old they are, what race they are, where they are located, what websites they visit, and so on. Often enough, that person is a child. But who would want a child’s data?
Companies that develop “smart” toys are the first example. In the past year, they have been in the spotlight for excessively collecting, storing and mishandling minors’ data. Perhaps you still remember the notorious case of “My Friend Cayla”, the “smart” doll that was shown to record children’s conversations and share them with advertisers. In fact, the doll was banned in Germany as an illegal “hidden espionage device”. However, the list of “smart” technologies collecting children’s data is long. Another example of a private company mistreating children’s data is the case of Google offering its school products to young American students and tracking them across their different (home) devices to train other Google products. Similarly, a German Data Protection Authority (DPA) decided to ban Microsoft Office 365 from schools over privacy concerns.
Besides private companies, state authorities have an interest in recording, storing and using children’s online activity. For example, a 2018 Big Brother Watch report points out that in the United Kingdom the “Department for Education (DfE) demands a huge volume of data about individual children from state funded schools and nurseries, three times every year in the School Census, and other annual surveys.” Data collected by schools (a child’s name, birth date, ethnicity, school performance, special educational needs and so on) is combined with social media profiles or other data (e.g. household data) bought from data brokers. Why link all these records? Local authorities aim to train algorithms that predict children’s behaviour in order to identify children deemed prone to gang affiliation or political radicalisation.
Consequences for a child’s future
Today’s children have the largest digital footprint in human history. Sometimes, the collection of a child’s data starts even before they are born, and this data will increasingly determine their future. What does this mean for children’s development and their life choices?
The extensive data collection on today’s children aims at neutralising behavioural “errors” and optimising their performance. But mistakes are valuable during a child’s self-development – committing errors and learning lessons is an important complement to receiving knowledge from adults. In fact, a recent psychology study shows that failing to provide an answer on a test benefits the learning process. Constantly using algorithms to optimise performance based on a child’s digital footprint will damage the child’s right to make and learn from mistakes.
A child’s mistakes are not only a source of important lessons. With a rising number of attacks targeting schools’ IT systems, children’s data can fall into the wrong hands. Silly mistakes could also be used to damage the reputation of the adult a child grows into. Some mistakes must be forgotten. However, logging every step of a child’s development increases the risk that past mistakes are later used against them.
Moreover, children’s data can contribute to discrimination against them. As mentioned above, data is used to predict child behaviour, with authorities aiming to intervene where they consider it necessary. But algorithms reflect human biases, for example against people of colour. What happens when a child of colour is predicted to be at risk of gang affiliation? Reports show that authorities treat children at risk of being recruited by a gang as if they were already gang members. Racial profiling by algorithms can therefore turn into a traumatic experience for a child.
EDRi is actively trying to protect you and your loved ones
European Digital Rights (EDRi) is a network of 42 organisations that promote respect for privacy and other human rights online.
Our free “Digital Defenders” booklet for children (available in many languages) teaches, in a fun and practical way, why and how to protect our privacy online. EDRi is also working on the ongoing reform of the online privacy (ePrivacy) rules. This reform has great potential to curb practices of data exploitation online.
Civil society advocates from Russia and Central and Eastern Europe have joined forces to form a new inter-regional NGO to promote privacy in countries bordering the EU.
The initiative also involves activists from post-Soviet countries, the Balkans and EU accession candidate countries. One of its primary objectives is to build coalitions and campaigns in countries that have weak or non-existent privacy protections. The project emerged from a three-day regional privacy workshop held earlier in 2019 at the Nordic Non-violence Study Group (NORNONS) centre in Sweden. The workshop agreed that public awareness of privacy in the countries represented was at a dangerously poor level, and concluded that better collaboration between advocates is one solution.
There has been a pressing need for such an alliance for many years. A vast arc of countries from Russia through Western Asia and into the Balkans has been largely overlooked by international NGOs and intergovernmental organisations (IGOs) concerned with privacy and surveillance.
The initiative was convened by Simon Davies, founder of EDRi member Privacy International and the Big Brother Awards. He warned that government surveillance and abuse of personal information have become endemic in many of those countries:
“There is an urgency to our project. The citizens of places like Azerbaijan, Kazakhstan, Kyrgyzstan, Turkmenistan, and Armenia are exposed to wholesale privacy invasion, and we have little knowledge of what’s going on there. Many of these countries have no visibility in international networks. Most have little genuine civil society, and their governments engage in rampant surveillance. Where there is privacy law, it is usually an illusion. This situation applies even in Russia.”
A Working Group has been formed involving advocates from Russia, Serbia, Georgia, Ukraine and Belarus, and its membership includes Danilo Krivokapić from EDRi member SHARE foundation in Serbia. The role of this group is to steer the legal foundation of the initiative and to approve a formal Constitution.
The initiative’s Moderator is the former Ombudsman of Georgia, Ucha Nanuashvili. He too believes that the new NGO will fill a long-standing void in privacy activism:
“In my view, regions outside the EU need this initiative. Privacy is an issue that is becoming more prominent, and yet there is very little regional collaboration and representation. Particularly in the former Soviet states there’s an urgent need for an initiative that brings together advocates and experts in a strong alliance.”
Seed funding for the project has been provided by the Public Voice Fund of the Electronic Privacy Information Center (EPIC). EPIC’s president, Marc Rotenberg, welcomed the initiative and said he believed it would “contribute substantially” to the global privacy movement:
“We have been aware for some time that there is a dangerous void around privacy protection in those regions. We appreciate the good work of NGOs and academics to undertake this important collaboration.”
The Working Group hopes to formally launch the NGO in October in Albania. The group is presently considering several options for a name. Anyone interested in supporting the work of the initiative or wanting more information can contact Simon Davies at simon <at> privacysurgeon <dot> org.