Digital trade: the new frontline in the fight for our rights
The EU is signing digital trade deals that could undermine fundamental rights and block oversight of software systems shaping our lives. From data protection to algorithmic accountability, these agreements risk empowering opaque systems – used by both companies and governments – at the expense of the people most affected by them.
What does ‘digital trade’ really mean?
Digital trade sounds harmless. It suggests online services and digital innovation. And to be clear, digital trade in itself is not the problem. But when the rules are negotiated in opaque processes without meaningful civil society engagement, and without proper safeguards for rights, the risks become serious.
The EU is now signing stand-alone Digital Trade Agreements (DTAs) for the first time, starting with Singapore and South Korea. These agreements continue and expand a problematic trend: digital chapters – dealing with issues like data flows and software governance – had already begun to appear in earlier free trade agreements, raising concerns about regulatory autonomy and rights protection. In these stand-alone deals, the risks are even more pronounced.
As EDRi’s new background paper shows, what is being agreed now could have ripple effects for years to come – affecting everything from data protection enforcement to future rules on algorithmic harms. Without strong resistance, DTAs could lock the EU into a model of economic governance that puts commercial interests above people’s rights. The current wave of trade bullying makes it even more urgent to defend the EU’s ability to protect people’s rights in the digital age.
Read EDRi’s full background paper (PDF)
Digital borders dismantled – without strong enough safeguards
It is often claimed that we need as much data as possible to fuel competitiveness, and that cross-border data flows can be ‘trusted’ so long as a few reassurances are given. Neither claim holds up to scrutiny. More data does not automatically mean better outcomes – it often simply fuels extractive business models, deepens systemic discrimination, and worsens the environmental footprint of an already unsustainable digital economy. Trust is not a magic word: dismantling digital borders without firm safeguards undermines people’s rights in practice, not just in theory.
Among the most concerning elements are the provisions that favour cross-border data flows, even when the receiving country offers much weaker protections. It’s not simply that data can flow abroad – it’s that the EU could lose its ability to stop such transfers, even when rights are endangered.
For example, EU data (personal and otherwise) could be sent to Singapore, where public authorities are exempt from data protection laws and surveillance is pervasive. Once there, the data could be accessed, shared, or transferred onwards to third countries without the strong safeguards the GDPR requires for personal data, or those the Data Act sets out for other data. This means not only exposure to weaker protections abroad, but also exposure to new risks once the data is moved again beyond the initial destination.
Those most at risk are often the same communities already targeted by surveillance and data-driven harms: migrants, racialised groups, and others whose personal information could be exploited or mishandled without meaningful ways to seek redress.
When regulators are locked out of the system
Another key danger lies in provisions that restrict governments’ ability to require access to source code or the logic behind software systems. While framed as protecting business secrets, these clauses undermine the public’s ability to understand or challenge how automated decisions are made, even when those decisions have serious consequences for people’s lives.
This isn’t just about artificial intelligence. Many software-based tools – welfare eligibility systems, hiring algorithms, predictive policing tools, biometric verification platforms – use automated logic that can reproduce bias or hide discrimination. Without the ability to examine these systems, authorities cannot properly enforce laws like the GDPR, the AI Act, or even basic anti-discrimination rules.
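To see why source code access matters, consider a deliberately simplified sketch in Python. Everything in it is hypothetical – the field names, weights and 'high-risk' postcodes are invented for illustration, not drawn from any real system – but it shows how a single line of code can encode proxy discrimination that is invisible in a system's outputs.

```python
# A purely hypothetical eligibility scorer, invented for this article.
# None of the field names, weights, or thresholds come from any real system.

def eligibility_score(applicant: dict) -> bool:
    """Return True if the (fictional) benefit application is approved."""
    score = 0.0
    if applicant["employment_years"] >= 2:
        score += 2.0
    if applicant["has_fixed_address"]:
        score += 1.5

    # The problem hides here: postcode looks like a neutral input, but if
    # certain postcodes correlate with racialised or migrant communities,
    # this single line quietly penalises them. Applicants only ever see
    # 'approved' or 'denied', never this rule.
    if applicant["postcode"] in {"X1", "X2"}:  # fictional 'high-risk' areas
        score -= 2.5

    return score >= 2.0

# Two applicants, identical except for where they live:
a = {"employment_years": 3, "has_fixed_address": True, "postcode": "Y9"}
b = {"employment_years": 3, "has_fixed_address": True, "postcode": "X1"}
print(eligibility_score(a))  # True: approved
print(eligibility_score(b))  # False: denied, with no visible explanation
```

Auditing outputs alone, a regulator would have to guess which inputs to vary to uncover a rule like this; reading the source makes it immediately visible. That is precisely the kind of scrutiny these trade provisions could foreclose.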
These harms are not theoretical. Disabled people wrongly denied benefits. Gig workers algorithmically fired without explanation. Trans people excluded from services because of flawed verification software. Without meaningful transparency, these injustices become harder to detect – and harder to fix.
Why exceptions are not enough
Trade agreements include exceptions that claim to protect public interest, such as allowing data flow restrictions or access to source code in limited cases. However, these exceptions are often weak, vague, and subject to strict legal tests that make them hard to invoke. Experience shows that, in trade law, exceptions are narrowly interpreted, forcing governments to prove that rights-protecting measures are the ‘least trade-restrictive’ option – a near-impossible standard. In practice, this leaves fundamental rights protections legally fragile, exposing measures to disputes and making enforcement highly uncertain.
It’s not just Big Tech – and it’s not just AI
While major tech companies certainly benefit from digital trade rules, the protections extend across both public and private sectors. Governments, contractors, and platforms deploying all kinds of software-based systems or benefiting from access to data would enjoy greater insulation from scrutiny.
And these systems are not always high-tech or labelled ‘AI’. Often, simple ranking algorithms, scoring models or decision-making software can have profound effects on people’s lives. Yet the rules being cemented in trade agreements could make it harder for regulators and affected communities to even know what systems are in use, let alone demand accountability.
This dynamic mirrors older trade strategies. In the past, pharmaceutical industries and copyright holders used trade agreements to limit access to medicines or impose heavy-handed enforcement rules. Now, digital trade agreements risk doing the same for the governance of data, software, and automation.
Trade has always shaped digital rights
Trade policy has been a critical front in EDRi’s work throughout its history. From opposing dangerous provisions in early trade deals to resisting some of the worst digital clauses in the Transatlantic Trade and Investment Partnership (TTIP), a proposed trade agreement between the EU and the US, EDRi and its members have long understood that trade frameworks are often used to restrict rights-based governance.
We are not alone in raising these concerns. Organisations like The European Consumer Organisation (BEUC) and the European Trade Union Confederation (ETUC) have also warned that provisions in these new digital trade agreements could endanger data protection, workers’ rights and consumer safeguards.
The digital rules of the future are being negotiated right now – and if they are negotiated without real safeguards, we may find it much harder to build a digital environment rooted in care, accountability and justice.
