Digital identity: the tension between participation and control

30 March 2026 · Prisma Team

Every modern democracy faces the same paradox. To participate in society — to vote, access healthcare, open a bank account, or receive a government benefit — you must first prove who you are. Identity is the gateway to civic life. But that same gateway can become a checkpoint. The systems that verify your identity also record your movements, your transactions, your associations. What begins as a condition for participation can quietly become an instrument of control.

This tension is not new. But the digital transformation of public services has amplified it beyond anything previous generations experienced. When identity moves online, it generates data trails that are permanent, searchable, and combinable in ways that paper records never were. Understanding this tension — and building technology that resolves it rather than deepening it — is the central challenge that Prisma was designed to address.

The academic foundation

The scholarly debate about identity in networked societies has deep roots. Two works in particular laid the groundwork for how we think about digital identity today.

In The Power of Identity (1997, revised 2010), Castells argued that the network society fundamentally transforms how identity is constructed and contested. In pre-digital societies, identity was largely given — by your family, your community, your nation. In network societies, identity becomes a project: something you actively construct through the connections you make and the information flows you participate in. But Castells also warned that networks concentrate power. Those who control the infrastructure of connection — the switches, the protocols, the platforms — hold disproportionate influence over whose identity counts and whose does not.

Giddens, in Modernity and Self-Identity (1991), offered a complementary perspective. He described the “reflexive self” — the idea that modern individuals continuously construct and revise their sense of self in response to new information. This reflexive process requires trust in the systems that mediate daily life. When those systems become opaque or unaccountable, the reflexive self is undermined. You cannot meaningfully construct your identity if you do not understand — or cannot control — how your information is being used.

These insights from the 1990s proved prophetic. The digital platforms that emerged in the following decades created precisely the conditions Castells and Giddens warned about: identity systems that are powerful, opaque, and controlled by a very small number of actors.

What went wrong

By the mid-2010s, the theoretical warnings had become concrete reality. Three works document what happened.

Zuboff’s The Age of Surveillance Capitalism (2019) is perhaps the most comprehensive account. She describes how technology companies discovered that the data generated by ordinary human behaviour — searches, clicks, movements, purchases — could be treated as a free raw material. Identity, in this model, is not something you possess. It is something extracted from you, refined into predictions about your future behaviour, and sold on markets you never see. The “behavioural surplus” that Zuboff describes is fundamentally an identity surplus: the gap between what you knowingly share and what is inferred about you without your knowledge or consent.

Couldry and Mejias, in The Costs of Connection (2019), take this analysis further. They argue that the extraction of human life-data by platforms constitutes a new form of colonialism — “data colonialism” — in which the raw material being appropriated is human experience itself. Just as historical colonialism appropriated land and labour, data colonialism appropriates the continuous stream of data that constitutes digital identity. The parallel is not merely rhetorical: the power asymmetries, the lack of meaningful consent, and the concentration of value extraction follow the same structural logic.

Van Dijck’s The Platform Society (2018) examines how this extraction reshapes public institutions. When governments rely on commercial platforms for essential services — communication, education, health — they effectively delegate identity governance to private actors. The platform becomes the intermediary between citizen and state, with its own commercial logic shaping what is visible, accessible, and permissible.

These are not abstract concerns. The Snowden revelations in 2013 demonstrated that intelligence agencies had built mass surveillance systems on top of commercial data infrastructure (the PRISM programme). The Schrems II ruling in 2020 confirmed that US law structurally prevents adequate protection of European identity data. And the ongoing controversy around major technology vendors’ contracts with international institutions — including military and intelligence bodies — illustrates how commercial identity infrastructure can serve purposes far removed from the interests of the individuals it profiles.

The ethical response

If the problem is structural, the response must also be structural. Two thinkers have been particularly influential in shaping how the European policy community approaches digital identity.

Floridi, through works including The Onlife Manifesto (2015) and The Ethics of Artificial Intelligence (2023), has argued that the digital and physical worlds are no longer separable. We live “onlife” — in a continuous blend of online and offline experience. In this context, identity is not something you have in one world and represent in another. Your digital identity is part of your identity. Treating digital identity data as a commodity to be traded is therefore not merely a privacy violation — it is a violation of personal integrity. Floridi’s work has been widely cited in EU policy circles, including in the framing of the GDPR, the AI Act, and the European Digital Identity framework.

Nissenbaum’s Privacy in Context (2010) provides the most actionable framework for translating these ethical principles into system design. Her theory of “contextual integrity” holds that privacy is not about secrecy or control in the abstract. It is about ensuring that information flows conform to the norms of the context in which they occur. Medical information shared with your doctor is appropriate; the same information shared with your employer is a violation — not because the data changed, but because the context changed.

This insight is powerful because it can be formalised. If you can describe the norms of a context — who the actors are, what information types are involved, under what conditions transmission is appropriate — you can encode those norms as machine-readable policy. And that is precisely what ODRL 2.2 (the Open Digital Rights Language) enables. Nissenbaum’s contextual integrity, once a philosophical framework, becomes enforceable code.
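To make the idea concrete, here is a minimal sketch (not Prisma's actual code) of a contextual-integrity norm expressed as a machine-checkable rule: a flow of information is permitted only if an explicit norm for that context allows it. The `Flow` type, the norms table, and the default-deny check are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    """One information flow: who sends what to whom, and for what purpose."""
    sender: str
    recipient: str
    info_type: str
    purpose: str

# Context norms, Nissenbaum-style: the same information type is appropriate
# in one context (doctor, treatment) and a violation in another (employer).
NORMS = {
    ("medical", "doctor", "treatment"): True,
    ("medical", "employer", "hiring"): False,
}

def flow_permitted(flow: Flow) -> bool:
    """Default-deny: a flow is allowed only if a norm explicitly permits it."""
    return NORMS.get((flow.info_type, flow.recipient, flow.purpose), False)

print(flow_permitted(Flow("patient", "doctor", "medical", "treatment")))   # True
print(flow_permitted(Flow("patient", "employer", "medical", "hiring")))    # False
```

The point of the sketch is the shape of the rule, not the table: once norms are data rather than prose, they can be serialised as ODRL constraints and evaluated at every access point.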

"Privacy is not about hiding. It is about ensuring that information flows respect the context in which they were generated." — Nissenbaum's core insight, implemented in Prisma as ODRL policy.

Prisma’s answer: W3C DID + ODRL

Prisma translates these academic insights into running infrastructure. The design rests on three W3C standards, each addressing a specific dimension of the identity problem.

W3C Decentralised Identifiers (DIDs) resolve the Castells problem — the concentration of identity control in network switches. A DID is a globally unique identifier that is not issued or controlled by any central authority. There is no single registry, no single point of failure, and no single entity that can revoke your identity. Each Prisma node generates its own DIDs. Organisations and individuals can prove who they are without depending on a platform, a government, or a corporation to vouch for them. Sovereignty over identity returns to the entity that identity describes.
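The anatomy of a DID is simple enough to show directly. The sketch below uses the `did:example` method from the W3C specification and a placeholder key — the identifier, key value, and document are illustrative, not real Prisma identifiers.

```python
# A DID has three colon-separated parts: scheme, method, method-specific id.
did = "did:example:123456789abcdefghi"
scheme, method, method_id = did.split(":", 2)

# The DID resolves to a DID document: a self-describing record that lists
# the public keys the subject controls. No central registry issues it.
did_document = {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": did,
    "verificationMethod": [{
        "id": f"{did}#key-1",
        "type": "Ed25519VerificationKey2020",
        "controller": did,                          # the subject controls its own keys
        "publicKeyMultibase": "z6Mk...placeholder", # not a real key
    }],
    "authentication": [f"{did}#key-1"],
}
```

Because the `controller` of the key is the DID itself, proving control of the private key is proving the identity — no platform, government, or corporation sits in the loop.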

ODRL 2.2 resolves the Nissenbaum problem — the enforcement of contextual integrity. Every dataset, every API endpoint, every federation query in Prisma carries an ODRL policy that specifies exactly who may access the resource, for what purpose, under what constraints, and with what obligations. These are not vague terms of service. They are machine-readable, machine-enforceable rules. A hospital sharing patient pathway data with a research institute can specify that the data may be used for epidemiological research, must be anonymised before processing, may not be combined with commercial datasets, and must be deleted after three years. The ODRL engine enforces these rules automatically, at every access point.
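The hospital scenario above can be written out as an ODRL 2.2 policy in JSON-LD. This is a hedged sketch, not a policy from the Prisma codebase: the party and asset URIs are invented, though the actions (`use`, `anonymize`, `aggregate`, `delete`), operands (`purpose`, `elapsedTime`), and structure follow the ODRL 2.2 vocabulary.

```python
import json

policy = {
    "@context": "http://www.w3.org/ns/odrl.jsonld",
    "@type": "Agreement",
    "uid": "https://example.org/policy/patient-pathways-001",
    "assigner": "https://example.org/party/hospital",
    "assignee": "https://example.org/party/research-institute",
    "permission": [{
        "target": "https://example.org/data/patient-pathways",
        "action": "use",
        "constraint": [{          # only for epidemiological research
            "leftOperand": "purpose",
            "operator": "eq",
            "rightOperand": "epidemiological-research",
        }],
        "duty": [{"action": "anonymize"}],  # must anonymise before processing
    }],
    "prohibition": [{
        "target": "https://example.org/data/patient-pathways",
        "action": "aggregate",    # may not be combined with other datasets
    }],
    "obligation": [{
        "action": "delete",
        "constraint": [{          # after three years (ISO 8601 duration)
            "leftOperand": "elapsedTime",
            "operator": "eq",
            "rightOperand": "P3Y",
        }],
    }],
}

print(json.dumps(policy, indent=2))
```

Every clause in the prose maps to one rule in the policy, which is exactly what makes it machine-enforceable rather than a terms-of-service promise.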

PROV-O resolves the Floridi problem — accountability in an onlife world. Every action in Prisma generates a provenance record: who accessed what, when, under which policy, and what was done with it. This creates a verifiable audit trail that makes the system accountable not just to its operators but to the individuals and organisations whose data it holds. When a regulator or a citizen asks “who accessed my data and why?”, the answer is not buried in server logs. It is structured, queryable, and cryptographically signed.
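A provenance record of this kind can be sketched in PROV-O terms as follows. The identifiers and timestamps are illustrative assumptions; the structure uses the standard PROV-O pattern of a qualified association whose `prov:hadPlan` points at the governing policy.

```python
# Hypothetical access event, recorded as a PROV-O Activity in JSON-LD:
# who accessed what, when, and under which ODRL policy.
prov_record = {
    "@context": {"prov": "http://www.w3.org/ns/prov#"},
    "@id": "urn:uuid:access-event-42",           # illustrative identifier
    "@type": "prov:Activity",
    "prov:startedAtTime": "2026-03-30T09:15:00Z",
    "prov:used": {                                # what was accessed
        "@id": "https://example.org/data/patient-pathways",
    },
    "prov:qualifiedAssociation": {
        "@type": "prov:Association",
        "prov:agent": {                           # who accessed it
            "@id": "did:example:research-institute",
        },
        "prov:hadPlan": {                         # under which policy
            "@id": "https://example.org/policy/patient-pathways-001",
        },
    },
}
```

Because the agent is identified by its DID and the plan by the policy URI, the three standards interlock: the provenance record answers "who, what, when, and under which rules" in one queryable structure.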

Crucially, none of these components depend on Big Tech infrastructure. Prisma runs on EU-sovereign cloud providers. The code is open source, hosted on Codeberg. The standards are W3C specifications, governed by an international community. No single vendor can change the rules, raise the prices, or hand your data to a foreign court.

From theory to practice

It is easy to write about digital sovereignty. It is considerably harder to build it. Many projects in this space remain at the level of white papers and architecture diagrams.

Prisma is different. The federation protocol is operational. ODRL policies are being enforced in real queries. PROV-O provenance records are being generated and stored. DID-based authentication is handling actual access control decisions. The W3C standards we describe are not aspirational — they are the standards our code implements today.

These are not just academic ideas. They are deployed, running software.

The academic tradition from Castells to Nissenbaum gives us the language to understand why digital identity matters and what is at stake when it is mismanaged. The W3C standards community gives us the technical building blocks. Prisma’s contribution is putting them together into a coherent, operational platform that any European organisation can deploy — without asking permission from, or sending data to, anyone outside its own jurisdiction.

The tension between participation and control will not resolve itself. It requires deliberate design choices, grounded in both ethical reasoning and practical engineering. That is what Prisma is for.

Questions or feedback? Reach us at info@prisma-platform.eu or visit the project on Codeberg.
