The Online Safety Bill: punishing victims
The government has today announced two new regressive and unworkable additions to the Online Safety Bill. With each new announcement, the Bill increasingly shows that it will make the online world less safe for the very people it claims to protect, particularly LGBTQ+ people, survivors of abuse and ethnic minorities.
Crackdown on anonymity
The government claims that abuse is ‘thought’ to be linked to anonymity, but in practice most online abuse comes from readily identifiable people who simply believe they are entitled to talk down to, threaten or humiliate others. Even when users are anonymous, police data requests can already strip away that anonymity by linking their IP addresses to their home addresses through a match provided by their internet service provider.
More importantly, for victims of abuse, anonymity is often the only way they can access the online world while staying safe.
LGBTQ+ people frequently use anonymity to shield themselves from the real-world abuse and prejudice they may otherwise face from family, religious or community members.
Survivors of physical or sexual abuse use anonymity to reduce the risk of encountering their abusers. Others, from trade unionists to whistleblowers, use anonymity to keep their opinions and beliefs separate from their identity, protecting themselves from those who would seek to do them harm for holding those views.
It is sad to imagine how the example of the British government undermining anonymity will be used by Putin and his allies. In that international context, it is easy to see how dangerous this proposal is.
Of course, proponents of this measure will say that user identification is optional and anonymous accounts will still be allowed. However, these accounts will only be able to interact with other unverified accounts, or with verified users who have not blocked unverified accounts. In other words, if you are vulnerable, you will be restricted, judged and labelled as a potential abuser.
Crackdown on ‘legal but harmful’ content
The government is also claiming that adult users will be able to opt out of seeing legal but harmful content allowed on major platforms, such as anti-vax posts, racist abuse or the promotion of eating disorders. Precedent, however, shows that such content cannot be accurately identified and blocked.
Algorithms struggle hugely to identify context-dependent information. How will an algorithm know whether a post about the Taliban is promoting peace or violence, or whether a discussion of torture is political or depraved?
Errors in filters and content moderating algorithms have a tendency to discriminate against the people that these proposals are designed to protect. It is well known that LGBTQ+ content is routinely identified by machines as potentially sexual in nature and blocked. Advice about sex, drugs and alcohol similarly ends up mis-classified and taken down. None of this will make adults safer.
Minority language speakers are also likely to suffer incorrect blocks; examples have surfaced of algorithms flagging content as terror-related for mere mentions of the wrong groups or phrases, or when activists document human rights abuses.
Ministers often display a misplaced faith in technology to solve such problems. However, this is a problem that cannot be properly solved by technology.
Of course, most adults will not use broken content filtering, because it is not a useful tool. And very few platforms allow apparently dangerous material, so they won't be designing optional filters for such content.
What this policy is really designed to achieve, Open Rights Group concludes, is not safety but headlines.
This article was first published here.
Image credit: Image by Chris Slupski under a CC-By-2.0 licence
(Contribution by: Jim Killock, Executive Director, EDRi member, Open Rights Group)