The EU’s home affairs chief wants to read your private messages

The CSA Regulation, proposed by European Commissioner Ylva Johansson, could undermine the trust we have in secure and confidential processes like sending work emails, communicating with our doctors, and even governments protecting intelligence.

By EDRi · March 29, 2023

A new law is threatening the privacy of the European Union’s 447 million inhabitants.

The regulation would routinely scan our private communications online using AI tools, looking for the spread of child sexual abuse material.

Whether or not you are suspected of a crime, this scanning could include everyone: hundreds of millions of law-abiding European residents.

This EU home affairs law does not just propose to scan the words that we type. It also wants to scan the personal pictures on our phones, the documents on our clouds, and the contents of our emails.

All the ways that we live our lives online, including a lot of deeply personal information, could be subject to regular digital searches.

Monitoring anyone's legitimate conversations will harm everyone, especially children. Experts warn that no one is protected by making the internet less secure.

Mass surveillance online does not make us safer; it erodes our democratic rights and freedoms.

Did you know that AI-based tools are fundamentally discriminatory?

Research confirms that AI systems perpetuate discrimination. We see men of colour flagged as suspicious when they’re not doing anything wrong. Women’s and girls’ bodies and LGBTQ+ people are over-censored.

These technologies entrench structural racism, sexism, homophobia and inequality, meaning that certain people are over-targeted whilst others are erased.

How can we trust this inherently faulty technology with such a sensitive issue as our children’s safety online?

Despite what the name suggests, AI tools aren’t even particularly intelligent — at least not in the way that we commonly think of intelligence.

They make mistakes that even a small child would not make. That does not mean that they cannot be useful, but we must be very careful about when it is — and is not — appropriate to use them.

"AI detection inevitably flags a lot of innocent material. How do we know? Because these AI-based false reports are already happening."

Ella Jakubowska, Senior Policy Advisor, EDRi

Under the new proposal, these biased, unreliable AI tools would predict whose messages, pictures or uploads contain child abuse.

Based on what we know about AI and discrimination, it is likely that a Black man or a queer person, for example, would be more likely to be wrongfully flagged as a suspect and reported to the authorities.

AI detection inevitably flags a lot of innocent material. Cherished pictures of families on the beach or a snap of the kids in the bath sent to grandma.

A selfie which was uploaded to your personal cloud. A message from a teenager to their older cousin asking for advice.

None of this will be private any more.

How do we know? Because these AI-based false reports are already happening — and at rates much higher than the new law claims.

Guilty until proven otherwise?

According to the draft law, if you choose to use online apps or platforms that respect your privacy and personal data, it is more likely that your private communications will be routinely scanned.

“Why would you want to protect your personal messages if you don’t have anything to hide?” is the logic of the EU’s Home Affairs unit.

"The EU cannot breach our digital private lives just in case we are doing something wrong."

Ella Jakubowska, Senior Policy Advisor, EDRi

In a time of surveillance advertising, mass abuses of our personal data, and ‘Big Tech’ platforms that hold more power than some governments, choosing chat and email providers that respect our privacy is the only smart choice.

Encrypted chat apps, for example, are one of the few tools we actually have that help us stay safe online.

That’s why downloads of the secure messenger app Signal increased ten-fold following Russia’s invasion of Ukraine.

The EU cannot breach our digital private lives “just in case” we are doing something wrong. This violates the most basic principles of how we organise as a society.

Can surveillance actually protect children?

The EU’s Home Affairs chief, Ylva Johansson, says that this proposal is the only way that the EU can keep children safe online.

Yet both the United Nations and UNICEF have already repeatedly warned against the generalised surveillance of young people’s internet use, confirming that it is harmful to children.

The EU’s data protection supervisor cautions that this law would put almost all EU internet users at risk, with very little evidence that it will stop child abuse.

"We need to do more to tackle those individuals through ... child protection specialists, prevention, justice, education and better online reporting and user empowerment tools."

Ella Jakubowska, Senior Policy Advisor, EDRi

And in a landmark new survey, 80% of young people said that if their communications were routinely scanned, they would no longer feel safe being politically active or exploring their sexuality.

As anti-trafficking technology expert Anjana Rajan warns, secure communications tools are mostly used for legitimate purposes.

The fact that a minority abuse them doesn’t mean we should put everyone at greater risk.

It means we need to do more to tackle those individuals through investments in child protection specialists, prevention, justice, education and better online reporting and user empowerment tools.

That’s where the EU should be focusing its efforts.

This article was first published by Euronews.

Ella Jakubowska

Senior Policy Advisor, EDRi

Twitter: @ellajakubowska1