05 Jun 2019

Our dependency on Facebook – life-threatening?

By Bits of Freedom

What is your priority when a terrorist attack or a natural disaster takes place close to where your parents live or where your friend went on holiday? Obviously, you would immediately like to know how your loved ones are doing. You will call and text them until you get in touch.

Or, imagine that you happen to be close to an attack yourself. You have little or no information, and you see a person with weapons running down the road. You would urgently call the police, right? You try to call, but it isn’t possible to connect to the mobile network. Your apps are not working either. You can’t inform your loved ones, you can’t find information about what’s going on, and you can’t call the police. Right at the time that communication and knowledge are vital, you can’t actually do anything. Afterwards, it appears that the telecom providers switched off their mobile networks directly after the attack, obeying police orders. This measure was necessary for safety, because it was suspected that the perpetrators were using the mobile network.

This scenario isn’t that far-fetched. A few years ago, the mobile phone network in the San Francisco subway was partially shut down. The operator of the metro network wanted to disrupt a demonstration against police violence, after an earlier protest had disrupted the timetable. The intervention was justified on the grounds of passenger safety: as a consequence of the previous demonstrations, the platforms had become overcrowded with passengers who couldn’t continue their journeys. However, the intervention was harshly criticised, as the deactivation of the phone network had itself endangered the passengers – because how do you, for example, alert the emergency services in an emergency situation when nobody’s phone is working?

Immediately after the terrorist attacks in Sri Lanka in April 2019, the government did something similar: it made services like Facebook unavailable, to prevent the flood of speculation spreading through such platforms from worsening the chaos.

In Sri Lanka, Facebook is practically a synonym for “the internet” – it’s the main communication platform in a country where the practice of zero-rating flourishes. As a result of Facebook’s dominance, content published on the platform can very quickly reach an enormous audience. And it is exactly the posts that capitalise on fear, discontent, and anger that have a huge potential to go viral, whether they are true or not. Facebook itself has no incentive to limit the impact of these posts. On the contrary: the most extreme messages contribute to the addictive nature of the social network. The posts themselves aren’t a threat to people’s physical safety, but in the context of terrorist attacks, they can be lethal.

The distribution of false information is apparently such a huge problem that the Sri Lankan government saw no other option than to disconnect the main communication platform in the country. It’s a decision with far-reaching consequences: people are being isolated from their main source of information and from the only communication tool to reach their family and friends. We find ourselves in a situation in which the harmful side-effects of such a platform are perceived to be bigger than the gigantic importance of open communication channels and provision of information – rather no communication than Facebook-communication.

This shows how dangerous it is when a society is so dependent on one online platform. This dependency also makes it easier for a government to gain control by denying access to that platform. The real challenge is to ensure a large diversity of news sources and means of communication. In the era of information, dependency on one dominant source of information can be life-threatening.

This article was first published at https://www.bitsoffreedom.nl/2019/05/29/life-threatening-our-dependency-on-facebook/

Life-threatening: Our dependency on Facebook (only in Dutch, 06.05.2019)

BART Pulls a Mubarak in San Francisco (12.08.2011)

Social media temporarily blocked (21.04.2019)

Sri Lanka blocks social media, fearing more violence (21.04.2019)

(Contribution by Rejo Zenger, EDRi member Bits of Freedom, the Netherlands; translation from Dutch to English by Bits of Freedom volunteers Winnie van Nunen and Amber Balhuizen)

22 May 2019

Hey Google, where does the path lead?

By Bits of Freedom

If you do not know the directions to a certain place, you use a digital device to find your way. With our noses glued to the screen, we blindly follow the instructions of Google Maps or one of its competitors. But do you know which way you are being led?

Mobility is a social issue

Mobility is an ongoing debate in the Netherlands. Amsterdam is at a loss about how to deal with large cars on its narrow canals, and smaller municipalities such as Hoogeveen are constructing a beltway to relieve the Hollandscheveld area. Administrators want to direct the traffic on the roads, and as a result, they deliberately send us either right or left.

If all is well, all societal interests are weighed in that decision. If it is decided that a fragile village centre should be spared, road signs at the side of the road direct drivers around it. If the local authorities want to prevent cars from rushing past an elementary school, the cars are routed along a different path.

Being led by commercial interests

However, we are not only being led by societal interests. More and more, we use navigation systems to move from place A to place B. Those systems are being developed by an increasingly smaller group of companies, of which Google seems to be the frontrunner. Nowadays, hardly anyone navigates using a map and the traffic signs on the side of the road. We only listen to the instructions from the computer on the dashboard.

In this way, a commercial enterprise determines which route we take – and it has other interests than the local authorities. It wants to serve its customers in the best possible way. But who are these customers? For some companies, that’s the road users, but for others – often those where the navigation is free for the users – the customers that really matter are the invisible advertisers.

Too much of a short cut

And even that is too limited, of course, because the considerations the developer of the navigation system actually weighs are rarely transparent. When you ask Google for a route from the Westerpark to the Oosterpark in Amsterdam, it leads you around the canal belt instead of through it. That doesn’t seem to be the shortest route for someone on a bicycle.

Why would that be? Maybe Google’s algorithm is optimised for the straight street patterns of San Francisco and it’s unable to work with the erratic nature of the Amsterdam canals. Maybe it’s the fastest route available. Or maybe it’s a very conscious design choice so that the step-by-step description of the route does not become too long. Another possibility is that the residents of the canal belt are sick of the daily flood of cycling tourists and have asked Google, or maybe paid for it, to keep the tourists out of the canal belt. We simply don’t know.
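We cannot see Google’s actual weighting, but the underlying mechanics are no mystery: a navigation system computes the cheapest path over a weighted road graph, so whoever assigns the weights decides where you go. The sketch below is a simplified, hypothetical illustration using Dijkstra’s classic shortest-path algorithm; the place names and weights are invented for the example.

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm: the chosen route follows entirely from the edge weights."""
    queue = [(0, start, [start])]  # (accumulated cost, current node, path so far)
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for neighbour, weight in graph.get(node, {}).items():
            if neighbour not in seen:
                heapq.heappush(queue, (cost + weight, neighbour, path + [neighbour]))
    return None

# Toy map: Westerpark to Oosterpark, either through the canal belt or around it.
graph = {
    "Westerpark": {"canal belt": 2, "ring": 4},
    "canal belt": {"Oosterpark": 2},
    "ring": {"Oosterpark": 3},
}
print(shortest_path(graph, "Westerpark", "Oosterpark"))  # via the canal belt (cost 4)

# The map-maker quietly penalises the canal-belt segment; the "shortest" route changes.
graph["Westerpark"]["canal belt"] = 10
print(shortest_path(graph, "Westerpark", "Oosterpark"))  # now via the ring (cost 7)
```

The point of the sketch: nothing in the route the user sees reveals whether a weight encodes distance, travel time, or an interest someone lobbied (or paid) for.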

Being misled

Incidentally, that last reason is less far-fetched than you might think at first. When you are in Los Angeles, you can’t miss the letters of the Hollywood Sign. A lot of tourists want to take a picture with it. Those living on the hill beneath the monumental letters are sick of it. They have, sometimes illegally, placed signs by the side of the road stating that the huge letters are not accessible through their street.

With the rise of digital maps, that tactic became less and less successful. Pressured by a municipal councillor, Google and Garmin, a tech company specialising in GPS technology, adjusted their maps so that tourists are not led to the actual letters, but to a vantage point with a view of them. Both mapmakers changed their service under pressure from a single councillor’s effective lobbying.

Serving a different interest

It’s very rarely transparent which interests companies are taking into consideration. We don’t know which decisions those companies take and on which underlying data and rules they are based. We don’t know by whom they are being influenced. We can easily assume that the interests of such companies are not always compatible with public interests. This has a major impact on the local situation. If a company like Google listens to retailers, but not residents, the latter will be disadvantaged. The number of cars in and around the shopping streets is growing – which sucks, if you happen to live there. And even more so, if the local authorities do try to route the cars differently.

Again, this is another good example of how the designer of a technology impacts the freedom of the user of the technology. It also impacts society as a whole: we lose the autonomy to shape our living environment with a locally elected administration.

Moreover, this story is not only about the calculated route, but also about the entire interface of the software. The Belgian scientist Tias Guns described that very aptly: “There is, for example, an option to avoid highways, but an option to avoid local roads is not included.” As a driver, try and spare the local neighbourhood then.

The platform as a “dead end”

Adding to that – ironically – is the fact that the major platforms are not always reachable. Where do you turn if you want Google Maps to route less traffic through your street? Or, if you are a retailer, more traffic? On a local level, this is different: there is a counter at the city hall where you can go, and a city council where you can put traffic problems on the agenda. Even that is difficult enough to coordinate. The Chief Technology Officer of the city of Amsterdam recently said in an interview about the use of artificial intelligence by the local authority:

“In some areas, residents have a larger capability to complain. Think of the city centre or the ‘Oud-Zuid’ area, both more affluent areas and home to a large number of lawyers. It’s general knowledge that in those areas a complaint is made far more easily than, for example, in the less affluent area of Amsterdam ‘Noord’. This is not difficult for trained operators. They can handle experienced grumblers, and can judge for themselves whether a complaint is valid. A computer cannot.”

Another issue is that some digital mapmakers are so large – and will continue to grow – that they can afford to listen selectively.

Who determines the path?

So, who decides how our public space is being used? Is that a local city council or a commercial enterprise? This makes quite a difference. In the first case, citizens can participate, decisions are made democratically, and there is a certain amount of transparency. In the second case, you have no information on why you were led left or right, or why shopping streets have become desolate overnight. Most likely the rule is: whoever pays, gets to decide. The growing power of commercial enterprises in the issue of mobility is threatening to put local administrations – and with that us, the citizens and small companies – out of play.

Bits of Freedom

Hey Google, which way are we being led? (15.05.2019)

Hey Google, which way are we being led? (in Dutch, 15.05.2019)

Why people keep trying to erase the Hollywood sign from Google Maps (21.11.2014)

(Contribution by Rejo Zenger, EDRi member Bits of Freedom, the Netherlands; translation from Dutch to English by Bits of Freedom volunteers Alex Leering and Amber Balhuizen)

22 May 2019

Facebook lies to Dutch Parliament about election manipulation

By Bits of Freedom

On 15 May 2019, Facebook’s Head of Public Policy for the Netherlands spoke at a round table in the House of Representatives about data and democracy. The Facebook employee reassured members of parliament that Facebook has implemented measures to prevent election manipulation. He stated: “You can now only advertise political messages in a country, if you’re a resident of that country.” Nothing seems to be further from the truth.

Dutch EDRi member Bits of Freedom wanted to know whether it was possible to target Dutch voters from a foreign country, using the type of post and method of advertising employed in, among others, the “Leave” campaign in the UK. From Germany, they logged in to a German Facebook account, created a new page, and uploaded a well-known Dutch political meme. They then paid to have it shown to Dutch voters and settled the bill using a German bank account. Contrary to what Facebook led members of parliament to believe, nothing stood in their way.

The other way around was just as easy. Facebook failed to stop Bits of Freedom from targeting German voters interested in the German political parties Christian Democratic Union of Germany (CDU) and Alternative for Germany (AfD) with a CDU/AfD meme, even though they were using a Dutch Facebook account, had signed in from the Netherlands, and paid for the ad with a Dutch bank account. Better yet, Facebook suggested adding people with the additional interests “nationalism” and “military” to the target demographic. Thanks, Facebook!

We’re not dealing with a company that occasionally messes up. Facebook has time and time again exhibited a complete disregard for our democracy, freedom of expression, and privacy. Therefore, Bits of Freedom called on the House of Representatives to take action. On 20 May, on the Dutch current affairs television program “Nieuwsuur”, Labour Party (PvdA) leader Lodewijk Asscher responded: “Facebook promises to do better, and time and time again their promises prove worthless. Facebook says all the right things but in reality is a threat to democracy.” Liberal MP Kees Verhoeven (D66) added: “As far as I’m concerned, now is the time we stop relying on self-regulation and trusting companies’ promises, and start regulating.”

This article was first published at https://www.bitsoffreedom.nl/2019/05/21/facebook-lies-to-dutch-parliament-about-election-manipulation/

Bits of Freedom

Nieuwsuur: Facebook lies about political advertising (20.05.2019)

Nieuwsuur: Facebook lies about political advertising (only in Dutch, 20.05.2019)

Steps you can take to minimise the political ads you see online (19.05.2019)

(Contribution by Evelyn Austin, EDRi member Bits of Freedom, the Netherlands)

08 May 2019

It starts with free Pokémon Go, it ends with Bolsonaro

By Bits of Freedom

Chile was the first country in the world to have a net neutrality law, but it is not enforced at all. A simple search across mobile internet providers shows a wide range of offers of “free” data for platforms such as Facebook, Twitter, Instagram, Spotify, or Pokémon Go. This is called “zero-rating” and means people don’t have to pay for using some services like they would for others. It’s a violation of net neutrality.

These perks are crucial in the decision of millions of prepaid phone users who need to optimise their top-ups. This has led to a class divide where those with the economic means have access to the unlimited options of the internet, while those who need to be mindful of their expenses are constrained to the services of big tech corporations.

Class is central in the discussion about net neutrality. Supporters of net neutrality argue that without this regulatory framework users would get differentiated packages according to their economic means, and consequently there would be a first- and a second-class internet. Those in favour of zero-rating – and against net neutrality – refer to the same class divide, but as an argument for mitigating the cost of data plans for those in economic need.

Nobody wants to be the villain who opposes free Pokémon Go.

But the dystopia of corporations that are permitted to offer their services zero-rated doesn’t end there. The profound social infiltration of services owned by Mark Zuckerberg has led to scenarios in which entire communities rely on Whatsapp groups or Facebook fan pages as sources of information. Do you see where I’m going? Cambridge Analytica, anyone?

On 1 January 2019, Jair Bolsonaro became the president of Brazil. He is a right-wing politician who is in favour of torture, of the destruction of the Amazon rainforests, and of the criminalisation of homosexuality. The Guardian published a piece on how WhatsApp, a service used by 120 million Brazilians, proved to be a very effective tool to mobilise support for Bolsonaro. WhatsApp was used to promote his fascist promises, to harass users who questioned those promises and, of course, to send out big shipments of fake news.

According to the information that circulated in these WhatsApp groups, Bolsonaro’s opponent wanted to legalise pedophilia and incest, and his rival party was preparing a mandatory “gay kit” for 6-year-olds in Brazil’s public schools. It is, for sure, very easy to dismiss this as fake news if you have unlimited internet access to fact-check, or a support system of informed people who will tell you the truth. But what happens when you’re restricted to a single-platform ecosystem where those calling out fake news are harassed, and where support for it is amplified by likes and social acceptance?

The relation between zero-rating and the spread of fascism might at first sight seem very distant. However, a closer look at the human motivations and interactions that take place in these virtual “free” spaces, and at the economic interests of the tech business, reveals a systematic information attack on the most vulnerable users of mobile internet, who are forced to inhabit these environments of digital garbage.

Advocates and policymakers, who are well-versed in internet topics and hold the privilege of accessing secure and legitimate communication and information channels, can choose to blame the users of these services. They can choose to expect people to ignore fake news and spend their limited megabytes on the interactive visualisations of the New York Times instead of on Facebook with the people they know. However, it will be much more fruitful to work on strategies that guarantee a strict enforcement of net neutrality – including a ban on zero-rating – through an interdisciplinary approach that includes community, tech, and regulatory work.

It is important to fight for a vision where, at least in theory, all of us, regardless of our economic situation, are able to access and participate in an ecosystem of truthful information and open collaboration. We cannot abandon those with fewer means to the digital junk content that promotes fascism and generates toxic revenue for the big internet platforms.

This article was first published at https://www.bitsoffreedom.nl/2019/04/29/it-starts-with-free-pokemon-go-it-ends-with-bolsonaro/.

Zero rating: Why it is dangerous for our rights and freedoms (22.06.2016)

Two years of net neutrality in Europe – 31 NGOs urge to guarantee non-discriminatory treatment of communications (30.04.2019)

(Contribution by Danae Tapia, Mozilla fellow at EDRi member Bits of Freedom, the Netherlands)

24 Apr 2019

What the YouTube and Facebook statistics aren’t telling us

By Bits of Freedom

After the recent attack against a mosque in New Zealand, the large social media platforms published figures on their efforts to limit the spread of the video of the attack. What do those figures tell us?

Attack on their reputation

Terrorism presents a challenge for all of us – and therefore also for the dominant platforms that many people use for their digital communications. These platforms had to work hard to limit the spread of the attacker’s live stream. Even just to limit the reputational damage.

And that, of course, is why companies like Facebook and YouTube published statistics afterwards. All of this served to show that the task was very complex, but that they had done their utmost. YouTube reported that a new version of the video was uploaded every second during the first hours after the attack. Facebook said that it blocked one and a half million uploads in the first 24 hours.

Figures that are virtually meaningless

Those figures might look nice in the media but without a whole lot more detail they are not very meaningful. They don’t say much about the effectiveness with which the spread of the video was prevented, and even less about the unintended consequences of those efforts. Both platforms had very little to say about the uploads they had missed, which were therefore not removed.

In violation of their own rules

There’s more the figures do not show: How many unrelated videos have been wrongfully removed by automatic filters? Facebook says, for example: “Out of respect for the people affected by this tragedy and the concerns of local authorities, we’re also removing all edited versions of the video that do not show graphic content.” This is information that is apparently not in violation of the rules of the platform (or even the law), but that is blocked out of deference to the next of kin.

However empathetic that might be, it also shows how much our public debate depends on the whims of one commercial company. What happens to videos of journalists reporting on the events? Or to a video by a victim’s relative, who uses parts of the recording in a commemorative video of her or his own? In short, it’s very problematic for a dominant platform to make such decisions.

Blind to the context

Similar decisions are already taken today. Between 2012 and 2018, YouTube took down more than ten percent of the videos of the Syrian Archive account. The Syrian Archive is a project dedicated to curating visual documentation relating to human rights violations in Syria. The footage documented those violations as well as their terrible consequences. YouTube’s algorithms only saw “violent extremism”, and took down the videos. Apparently, the filters didn’t properly recognise the context. Publishing such a video can be intended to recruit others to armed conflict, but can just as well be documentation of that armed conflict. Everything depends on the intent of the uploader and the context in which it is placed. The automated filters have no regard for the objective, and are blind to the context.

Anything but transparent

Such automated filters usually work on the basis of a mathematical summary (a “hash”) of a video. If the summary of an uploaded video is on a list of summaries of known terrorist videos, the upload is refused. The dominant platforms work together to compile this list, but they’re all very secretive about it. Outsiders do not know which videos are on it. The problems start with the definition of “terrorism”: it is often far from clear whether something falls within that definition.

The definition also differs between countries in which these platforms are active. That makes it even more difficult to use the list; platforms have little regard for national borders. If such an automatic filter were to function properly, it would still block too much in one country and too little in another.
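The list-based matching described above can be sketched in a few lines. This is a deliberately simplified, hypothetical illustration using an exact cryptographic hash; the real shared industry database relies on perceptual fingerprints designed to tolerate small edits, and its contents are not public.

```python
import hashlib

# Hypothetical blocklist: summaries ("hashes") of known terrorist videos,
# compiled jointly by the platforms. Outsiders cannot see which videos
# produced these entries.
BLOCKLIST = {
    hashlib.sha256(b"known extremist video bytes").hexdigest(),
}

def summarise(video_bytes: bytes) -> str:
    """Reduce a video to a fixed-size mathematical summary (here: SHA-256)."""
    return hashlib.sha256(video_bytes).hexdigest()

def is_blocked(video_bytes: bytes) -> bool:
    """Refuse the upload if its summary is on the shared list."""
    return summarise(video_bytes) in BLOCKLIST

print(is_blocked(b"known extremist video bytes"))   # exact copy: blocked
print(is_blocked(b"known extremist video bytes."))  # one byte changed: missed
```

The last two lines show the structural weakness the article describes: an exact summary catches identical copies but misses trivially re-edited ones, which is why platforms move to fuzzier fingerprints – and why those fuzzier filters, in turn, start catching unrelated material.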

Objecting can be too high a hurdle

As mentioned, the published figures don’t say anything about the number of videos that were wrongfully removed. Of course, that number is a lot harder to measure. Platforms could be asked to provide the number of objections to a decision to block or remove content, but those figures would say little. That’s because the procedure for such a request is often cumbersome and lengthy, and often enough, uploaders will just decide it’s not worth the effort, even if the process would eventually have let them publish their video.

One measure cannot solve this problem

It’s unlikely that the problem can be solved with better computers or more human moderators. It just isn’t possible to serve the whole world with one interface and one moderation policy. What is problematic is that we have allowed an online environment to emerge that is dominated by a small number of platforms, which today hold the power to decide what gets published and what doesn’t.

What the YouTube and Facebook statistics aren’t telling us (18.04.2019)

What the YouTube and Facebook statistics aren’t telling us (only in Dutch, 08.04.2019)

(Contribution by Rejo Zenger, EDRi member Bits of Freedom; translation to English by two volunteers of Bits of Freedom, one of them being Joris Brakkee)

13 Mar 2019

What will happen to our memes?

By Bits of Freedom

In Europe, new rules concerning copyright are being created that could change the internet fundamentally. The consequences that the upload filters included in the EU copyright Directive proposal will have for our creativity online raise concerns. Will everything we want to post to the internet have to pass through “censorship machines”? If the proposed Directive is adopted and implemented, what will happen to your memes, for example?

The proposal that will shortly be voted on by the European Parliament contains new rules regarding copyright enforcement. Websites would have to check every upload made by their users for possible breaches of copyright, and block this content when in doubt. Even though memes are often extracted from a movie, well-known photo, or video clip, advocates of the legislation repeat time and again that this doesn’t mean memes will disappear − they reason that exceptions will be made for them. In practice, however, such an exception does not seem workable and impairs the speed, and thus the essence, of memes. It will be impossible for an automated filter to capture a meme’s context.

Step 1: You upload a meme

Imagine that you’re watching a series and you see an image that you would like to share with your friends − it could be something funny or recognisable to a large group of people. Or that you use an existing meme to illustrate a post on social media. Maybe you adjust the meme with the names of your friends or the topic that concerns you at that moment. Then you upload it to YouTube, Twitter, or another online platform.

Step 2: Your upload is being filtered

If the new Directive – as currently proposed – is implemented, the platform will be obliged to prevent any copyrighted material from appearing online. In order to abide by the legislation, platforms will install automated filters that compare all material uploaded to the platform with all the copyrighted material. In case of a match, the upload will be blocked. This will also happen to the meme you intended to share online, because it originates from a television series, video clip, or movie. You get the message: “Sorry, we are not allowed to publish this.”
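The matching step can be sketched as follows. This is a toy, hypothetical illustration: it stands in for a real content-recognition system by using simple text similarity, and the register entries, threshold, and messages are invented for the example.

```python
from difflib import SequenceMatcher

# Hypothetical register of copyrighted material the platform must match
# uploads against (stand-ins for video/image fingerprints).
COPYRIGHT_REGISTER = [
    "frame from a well-known television series",
]

THRESHOLD = 0.8  # similarity above which the filter assumes infringement

def filter_upload(upload: str) -> str:
    """Block the upload if it resembles registered material closely enough.
    The filter only sees similarity; it cannot see that the upload is a meme."""
    for work in COPYRIGHT_REGISTER:
        if SequenceMatcher(None, upload, work).ratio() > THRESHOLD:
            return "Sorry, we are not allowed to publish this."
    return "Published."

# A meme reuses the frame with a caption: still a close match, so it is blocked
# even though a meme exception may legally apply.
print(filter_upload("frame from a well-known television series + caption"))
print(filter_upload("original holiday photo"))
```

Note what is missing from the function: there is no input for context or intent, which is exactly why a legal exception for memes is so hard to automate.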

Step 3: It’s your turn

What!? What about the exception that was supposed to be there for memes? Of course the exception is still there, but in practice it’s impossible to train filters to recognise the context of every image. How does a filter know what is a meme and what isn’t? How do these filters keep learning about the new memes that appear every day? There are already many examples of filters that fail. Hence, you’ll need to get to work. Just as you can appeal against an online platform’s decision when it has wrongfully blocked a picture for depicting “nudity” or “violence”, you will be able to appeal when your meme doesn’t pass the filter. That probably means filling in a form in which you explain that it’s just a meme and why you think it should be allowed to be uploaded.

Step 4: Patience, please

After you fill in the form and click “send”, all you can do is wait. Just as is already the case with the filters of YouTube and Facebook: the incorrectly filtered posts need to be checked by real human beings, people who can assess the context and hopefully come to the conclusion that your image really is a meme. But that process can take a while… It’s a pity, because your meme was responding perfectly to current events. Swiftness, creativity, and familiarity are three key elements of a meme. With upload filters, to keep the familiarity, you lose the swiftness.

Step 5: Your meme will still be posted online − or not?

At a certain moment, you receive a message: either your upload has finally been accepted, or there are still enough reasons to refuse it. And then what? Will you try again on another platform? That might take some days as well. The fun and power of memes often lies in the speed with which someone responds to a politician’s proposal, or to an answer in a game show. So don’t let Article 13 destroy your creativity!

#SaveYourInternet as we know it! Call a Member of the European Parliament (for free) through pledge2019.eu!

Bits of Freedom

What will happen to our memes? (11.03.2019) https://www.bitsoffreedom.nl/2019/03/11/what-will-happen-to-our-memes/

What will happen to our memes? (only in Dutch, 11.03.2019) https://www.bitsoffreedom.nl/2019/03/04/wat-gebeurt-er-straks-met-onze-memes/


Save Your Internet

(Contribution by Esther Crabbendam, EDRi member Bits of Freedom, the Netherlands; translation by Winnie van Nunen)

27 Feb 2019

You cannot post “a bag of bones” on Facebook

By Bits of Freedom

However shocking our reality may be, sometimes you have to face it. By censoring a news article about the horrific war in Yemen, Facebook completely disqualifies itself as a platform for public debate.

This story should be heard

“Chest heaving and eyes fluttering, the 3-year-old boy lay silently on a hospital bed in the highland town of Hajjah, a bag of bones fighting for breath.” This is the first sentence of an article by the New York Times about the war in Yemen. But the article actually starts with a photo. Below the headline and above this first paragraph a picture of the seven-year-old Amal Hussain fills the screen. The picture is harrowing.

The article tells of the horrors of the unimaginable humanitarian disaster that is taking place in Yemen. For the third time in 20 years the United Nations is about to officially speak of famine. This story must be told and heard, no matter how painful it may be.

Censorship, censorship, censorship

That was also the opinion of freelance journalist Shady Grove Oliver, who shared the New York Times article with her followers on Facebook. Soon the post was removed because it was supposedly in violation of Facebook’s Community Standards. Why? The photo accompanying the newspaper article contained “nudity or sexual activity”, according to Facebook.

The journalist pointed out this shameful mistake to Facebook, but the platform stuck to its decision. Persevering, Grove Oliver asked for a real review by an actual human being. In a message to Facebook, she referred to an article by the editors of the New York Times in which the newspaper accounts for its decision to confront readers with the shocking images. Facebook still refused to reconsider its decision and instead blocked Grove Oliver’s entire account. Only hours later the account and the posts were shown again.

Fauxpologies from Facebook

On the same day as the article, the New York Times published an extensive piece explaining why it made the difficult decision to publish these photographs. “This is our job as journalists: to bear witness, to give voice to those who are otherwise abandoned, victimized and forgotten.” This stands in stark contrast to the way Facebook dealt with this important story. Firstly, Facebook’s content moderation policy is apparently so blunt that it confuses photos of emaciated children with “nudity or sexual activity”.

Secondly, once Facebook realised its mistake, the journalist received the usual clumsy fauxpologies from the company. The apologies were shown in a screen entitled “Warning”, followed by a text indicating that Grove Oliver had to confirm that she “understood”. No explanation of how this could have happened, of how bad Facebook thinks it is, or of what it learned from it. And, yes, what happened next is unfortunately no surprise: a few hours later, another of Grove Oliver’s posts was censored.

Facebook disqualifies itself (again)

It isn’t the photos of children that are shocking, but what is happening to these children in Yemen. And we must be confronted with that story. However painful it may be, we should not look away from it en masse. Reality is often harsh. It isn’t bad at all that we are confronted with it from time to time. It isn’t bad at all that it sometimes makes us feel a little queasy. That confrontation, that queasy feeling, are sometimes the driving force behind change. That’s why the New York Times writes: “we are asking you to look.”

Facebook’s mission is to bring “the world closer together.” But how do we get closer to each other as long as we are not allowed to see the suffering of others? When images of children who are victims of a horrible war are simply brushed away? Don’t these children belong to “the world” Facebook has in mind? And why do we still have faith in a company that cannot distinguish famine from sex? Or indeed, one that might not even want to?

Once again Facebook has completely disqualified itself as a place for public debate. With its dominant position, the company stands in the way of a critical view of the atrocities of our time. We urgently need to review how we want to communicate with each other.

You cannot post “a bag of bones” on Facebook (only in Dutch, 19.12.2018)

The New York Times: The Tragedy of Saudi Arabia’s War (26.10.2018)

Tweets by Shady Grove Oliver (16.12.2018) https://twitter.com/ShadyGroveO/status/1074426791736107019

Why We Are Publishing Haunting Photos of Emaciated Yemeni Children (26.10.2018)

(Contribution by Evelyn Austin and Rejo Zenger, EDRi member Bits of Freedom, the Netherlands; translation from Dutch to English by Martin van Veen)



13 Feb 2019

Time for better net neutrality rules

By Bits of Freedom

A Dutch court struck a blow against strong net neutrality protections. According to the court, the mobile operator T-Mobile may continue to provide certain music services with preferential treatment to its customers in the Netherlands − a disappointing judgment showing the need for better rules.


T-Mobile has thrown the principle of net neutrality overboard with their “Data-Free Music” service. This service provides certain music streaming services with preferential treatment over other services, as long as they fulfil the conditions set by T-Mobile. This practice is called “zero rating”. To get preferential treatment, the provider of the service must not only fit within the mold of a “music streaming service” as defined by T-Mobile, it must also meet the legal and technical requirements, again set by T-Mobile. This means that music streaming services that do not make it onto this list are at a disadvantage compared to their listed competitors.

In 2018, Dutch EDRi member Bits of Freedom appealed the decision of the national regulatory authority (ACM) not to act against T-Mobile’s “Data-free Music” service. The administrative court of first instance ruled in favour of T-Mobile: the service does not violate the net neutrality rules and the ACM does not have to act.

Unfortunately, for procedural reasons, the court did not reach a substantive judgment on the first part of the appeal. Bits of Freedom argued that the European net neutrality rules prohibit the preferential treatment of traffic from certain services by not charging this traffic to users. The court deferred to its previous judgment about this service in a case between T-Mobile and the ACM. In that judgment, it ruled that the prohibition on unequal treatment of traffic is limited to the technical treatment of traffic; the economic treatment of traffic is not covered. The court considered that ruling to also bind it in the case Bits of Freedom appealed.

The non-discrimination principle contained in the European net neutrality guidelines should apply to the treatment of traffic, regardless of whether that treatment involves delaying or blocking traffic or applying a different price to it. It is true that certain forms of differential technical treatment of traffic (so-called “traffic management”) are admissible under the net neutrality rules, but this does not automatically mean that the general standard to treat traffic equally is limited to the technical treatment of traffic.

In the second part of the appeal, Bits of Freedom explains why the zero rating service “Data-Free Music” limits the rights of end users and is therefore in violation of the net neutrality rules. An essential part of net neutrality is that internet users are free to determine which information, services or applications they use, without interference by an internet access provider. T-Mobile is doing exactly that: it influences how certain services are treated, for economic reasons. The ACM did not agree with this argument, and the court of first instance unfortunately upheld the ACM’s decision.

This judgment makes it clear that the current interpretation of the European net neutrality rules by the ACM and the Dutch court is a step backwards compared to the net neutrality rules in force in the Netherlands from 2012 until the European law came into effect. Under those rules, internet access providers could not treat services unequally by charging different prices for data traffic. Dutch internet users are therefore currently less protected against practices that undermine the open and innovative nature of the internet.

In 2019, the Body of European Regulators for Electronic Communications (BEREC) will review the guidelines that protect net neutrality. The current judgment shows that it is essential that the application of the rules on zero rating, and ultimately the rules themselves, be improved. Only this will ensure strong net neutrality in Europe.

Bits of Freedom

Time for better net neutrality rules (06.02.2019)

Bits of Freedom’s court case about zero rating (06.08.2018)

Judgment of the administrative court of first instance (only in Dutch, 24.01.2019)

T-Mobile treats everyone equally unequally (21.02.2018)

(Contribution by David Korteweg, EDRi member Bits of Freedom, the Netherlands)



28 Jan 2019

Period tracker apps – where does your data end up?

By Bits of Freedom

More and more women use a period tracker: an app that keeps track of your menstrual cycle. However, these apps do not always treat the intimate data that you share with them carefully.


An app that notifies you when to expect your period or when you are fertile can be useful, for example to predict when you can expect the side effects that for a lot of women come with being on your period. In itself, keeping track of your cycle is nothing new: putting marks in your diary or on your calendar has always been an easy way to take your cycle into account. But sharing data on the workings of your body with an app is riskier.

There seems to be quite a large market for period tracker apps. From “Ladytimer Maandstonden Cyclus Kalender” to “Magic Teen Girl Period Tracker”, from “Vrouwenkalender” to “Flo” – all neatly lined up in different shades of pink in the app store. “Femtech” is seen as a growing market, with startups raising billions in investment over the last couple of years. Are these apps made to give women more insight into the workings of their bodies, or to monetise that need?

It’s interesting to look at the kind of data these apps collect. The app usually opens with a calendar overview. In the overview you can input the date of your last period. In addition, you can keep a daily record of how you feel (happy, unhappy, annoyed) and whether you experience blood loss. But for most of these apps it doesn’t end there. Have you had sex? And if so, with or without protection? With yourself or with another person? How would you grade the orgasm? Did you have a stomach ache? Were your bowel movements normal? Did you feel like having sex? Sensitive breasts? An acne problem? Did you drink alcohol? Exercise? Did you eat healthy?

For a number of these questions it is understandable why answering them might be useful, if the app wants to learn to predict what stage of your cycle you are in. But a lot of these questions are quite intimate. And all this sensitive data often seems to end up in the possession of the company behind the app. The logical question then is: what exactly does a company do with all this data you hand over? Do you have any say in that? Do they treat it carefully? Is the data shared with other parties?

After digging through a number of privacy statements, it appears that one of the most used apps in the Netherlands, “Menstruatie Kalender”, gives Facebook the permission to show in-app advertisements. It’s not clear what information Facebook gathers about you from the app to show you advertisements. For example, does Facebook get information on when you are having your period?

Another frequently used app in the Netherlands is “Clue”. It’s the only one we found that has a comprehensive and easily readable privacy statement. You can use the app without creating an account, in which case your data is stored only locally on your phone. If you do choose to create an account, you give explicit consent to share your data with the company. In that case it is stored on secure servers. With your consent it will also be used for academic research into women’s health.

This cannot be said of many other apps. Their privacy statements are often long and difficult to read, and require good reading-between-the-lines skills to understand that data is being shared with “partners”. The sensitivity of your breasts may in itself not be very interesting to an advertiser, but by keeping track of your cycle the apps automatically acquire information on the possible start of one of the periods of your life most interesting to marketers: motherhood.

The most extreme example is Glow, the company behind the period tracker app “Eve”. Their app is focused on the potential desire to have children. The company’s tagline is as straightforward as they come: “Women are 40% more likely to conceive when using Glow as a fertility tracker”. Besides Eve, Glow has three other apps: an ovulation and fertility tracker, a baby tracker and a pregnancy tracker. The apps link to the Glow-community, a network of forums where hundreds of women share their experiences and give each other tips.

But that’s not the only thing that Glow offers. You can’t use a Glow webpage or app without being shown the “Fertility Program”. For 1,200 to 7,000 euros, you can enroll in various fertility programs. Too expensive? You can take out a cheap loan through a partnership with a bank. And in the end, freezing your eggs, if you are in your early thirties, is the most economically viable option, according to the website.

It turns out that Glow is a company selling fertility products. It has built a number of apps to attract more female customers, subtly and sometimes not so subtly. As a consumer you think you are using an app for keeping track of your cycle, but in the meantime you are constantly notified of all the possibilities of freezing your eggs, the costs of pregnancy at a higher age, and your limited fertile years. Before you know it, you are lying awake at age 30, wondering whether it would be more “economical” to freeze your eggs.

These apps shed light on what seems to be a contract to which we are forced to consent more and more often. In exchange for the use of an app that makes our lives a little bit easier, we have to give away a lot of personal information, without knowing exactly what happens with it. The fact that these apps deal with intimate information doesn’t mean that the creators treat it more carefully. To the contrary: it increases the market value of that data.

So before you download one of these apps, or advise your daughter to download one, think again. Take your time to read an app’s privacy statement, so you know exactly what the company does with your data. But regulatory bodies, such as the Autoriteit Persoonsgegevens in the Netherlands, also have a responsibility to ensure companies don’t abuse your intimate data.

Are you using one of these apps and do you want to know which data the company has gathered on you, or do you want to have that data erased? You can easily draw up a request which you can send by mail or email using My Data Done Right.

Bits of Freedom

Who profits from period trackers? (25.01.2019)

Who benefits from cycle trackers? (only in Dutch, 03.12.2018)

(Contribution by EDRi member Bits of Freedom; translated from Dutch by volunteer Axel Leering)



23 Jan 2019

EDRi’s Kirsten Fiedler wins Privacy Award


On 22 January, Kirsten Fiedler, current Senior Policy and Campaigns Manager and former Managing Director of European Digital Rights, received the distinguished Felipe Rodriguez Award in celebration of her remarkable contribution to our right to privacy in the digital age.

Why should we defend digital rights and freedoms when there are really pressing and often life-threatening issues out there to fight for? The reason is that the internet and digital communications are seeping into every part of our lives, so our rights online are the basis for everything else we do.

said Fiedler.

I’d like to accept this award on behalf of the entire EDRi team and network. Our strength is in collective, collaborative actions.

Fiedler’s relentless efforts have been crucial to transforming the EDRi Brussels office from a one-person entity into the current professional organisation with eight staff members. In addition, she played an instrumental role in EDRi’s campaigns against ACTA and privatised law enforcement, and has been the engine of the Brussels office’s growth over the past years.

The Felipe Rodriguez Award is part of the Dutch Big Brother Awards, organised by the EDRi member Bits of Freedom. Previous winners include Kashmir Hill, Open Whisper Systems, Max Schrems, and Edward Snowden. The award ceremony took place on 22 January 2019 in Amsterdam.

Read and watch the full speech here.

Photo: Jeroen Mooijman

Bits of Freedom announces winner of privacy award (09.01.2019)

“Our digital rights are the basis for everything we do.” (23.06.2019)