Could Google accuse you of child abuse? Impossible, right?

Legislators in Europe are working on a proposal that could force companies to scan all the messages we exchange for child sexual abuse material. The goal is noble, but it can very easily go wrong. And if it does, you might suddenly find yourself accused of sexually abusing children.

By Bits of Freedom (guest author) · September 28, 2022

All photos safe in the cloud 

The New York Times reported a month ago on a father in the U.S. whose child was sick. The boy’s penis was swollen and painful. Their general practitioner asked the father to send pictures of the boy’s genitals. The father took these pictures with his phone, which was set up to automatically upload every photo to Google’s cloud as well. Handy, in case he lost his phone. But Google’s computers use artificial intelligence to analyse all users’ photos. If it detects possible sexual abuse of children, all alarm bells go off. That is what happened here, with all the consequences that entails: Google deleted the father’s account and alerted the police.

Humans as a “fail-safe”

Whoops. There was no child abuse at all in this story. But because the photos are evaluated automatically, such errors are unavoidable. This is partly because computers are much less able than humans to assess the context in which a photo was taken. That is why policymakers who want to use this type of technology often propose a “human fail-safe”: before any action is taken, a person, not a machine, must first have looked at the material. But that is a poor “solution” for a system we cannot get to work flawlessly. Because when you think about it, this way of checking means asking ordinary people at ordinary companies to look at images of child abuse. These are not people who are trained for this, or who receive the much-needed psychological support for it, like the people working at the special victims’ department of the police do. To make things worse, these companies have a particularly bad reputation when it comes to supporting their moderators.

Mistakes continue to occur 

In the Netherlands, viewing images of child sexual abuse is punishable by law. So the new law the European Commission is developing to tackle the spread of child sexual abuse material online would force companies to hire employees to do something that under Dutch law is fundamentally illegal. In response, policymakers came up with a new proposal: an independent body would be set up to assess such images, and companies would forward your photos to it without looking at them themselves. But that practice is also deeply problematic, because then the confidentiality of your photos is in even greater peril. What parent wants to see sensitive photos of their child shared even more widely?

In the eyes of some policymakers, a “human fail-safe” could fix the problem of computer errors. The New York Times story shows that human review at Google did not work so well: Google reported its user to the police after its human check. And even after the police confirmed that nothing illegal had happened, Google stuck to its own version of the truth.

False accusation 

Take a moment to think about that. A major tech company wrongly accuses you of child sexual abuse. Very troubling. Then it reports you to the police. Even more troubling. The police then open an investigation into you. For the father in the New York Times story, this meant the police requested all available data about him from Google: his location data, search history, photos and files. That may be understandable, but it is just as troubling. And that father, ironically, is a developer of the software that labelled him a criminal and probably knows relatively well how to fight back. The vast majority of the population is less resilient and less familiar with the small print…

The case described above may be dismissed as the voluntary action of a single tech company, or as an isolated incident. But it isn’t. There are more examples, including ones involving Dutch users.

Europe wants to standardise this 

But there is more to consider. The European legislator is now discussing a bill that could force companies to scan all messages from all customers for child sexual abuse material. It wants to mandate a practice that has already proven ineffective and harmful in the case of Google and other technology companies. So we are going to see stories like this one more often.

Even more concerning is that the European Commission’s proposal endangers confidentiality on the internet. It wants technology companies to monitor communications that are still protected by end-to-end encryption today. But the confidentiality of communication is indispensable for everyone, including children and victims of sexual abuse. The proposal must therefore be dropped. Bits of Freedom and its European partners are working very hard to achieve exactly that. Will you help us? Donate now!

The article was first published by Bits of Freedom in Dutch here. Thanks to Celeste Vervoort and Philip Westbroek for the English translation.

Contribution by: Rejo Zenger, Policy Advisor, EDRi member Bits of Freedom