The Trump Administration’s aggressive foreign policy has reignited debates about the proper role of regulation in Artificial Intelligence. In February, at an important AI summit in Europe, Vice President JD Vance criticized the EU for the Digital Services Act and the EU AI Act, which he argued stifled innovation. Some countries, like the UK, have strategically held off on regulation in response to these developments, while many European leaders have doubled down on their efforts to rein in big tech.
But these debates about regulation and deregulation take us away from other pressing questions about how such a policy “gets done” and how lofty “values” are put into practice, questions we must consider as the AI Act comes into effect over the course of 2025. For example, the AI Act includes the following seven high-level principles:
- human agency and oversight
- technical robustness and safety
- privacy and data governance
- transparency
- diversity, non-discrimination and fairness
- societal and environmental well-being
- accountability
Few would argue with the importance of these principles for AI, but many people have complained about how these values are realised in the Act. Busuioc, Curtin and Almada single out transparency, noting that there has been a shift over the years in relation to machine learning in which “transparency” has come to mean “transparency of model decision making”, that is, attempting to open the black box of algorithms. Transparency, they argue, used to mean companies opening up their human records and financial bookkeeping, becoming accountable to other parties and watchdogs.
This raises an important question: how do values get “narrowed” like this, reduced in scope and funnelled into something technical or easy to implement? Is this simply an inevitable part of the movement from principles to practices, or is there something more troubling going on?
In a recently published article I wrote with Sonja Trifuljesko, we consider this question of what it means to “put principles into practice”. Based on long-term ethnographic fieldwork with an AI Ethics start-up in Finland, the article tells the story of the creation of a register for AI systems in the city governments of Helsinki and Amsterdam. This initiative was an explicit response to the repeated mantra of “putting principles into practice”. It was intended to embody transparency.
Our point in telling this story is to complicate the easy assumption that we can simply implement principles in practices, “downstream” as it were. Principles, by their nature, do not come with “how-to” instructions for how they are to be used in different contexts and situations. We show that even though the AI start-up began with a host of principles like transparency and fairness, there was a complex search process for which principles could be made actionable and how they related to each other. For the start-up, transparency was seen as the principle which unlocked all the others.
But with transparency in hand, the team still had to decide which practices could embody it. They settled on the idea of a register where AI developers could disclose different aspects of the models in use (possible risks, fairness algorithms, privacy considerations). This list of disclosures was drawn up in consultation with AI developers, city governments and members of the public, and was refined over time. In the final stages, when the register was turned into a public-facing website, they had to implement different levels of transparency for different types of audiences, from casual observers to journalists to the AI developers themselves. Some sleight of hand and some ambiguity were necessary for these audiences to interpret this narrow set of interventions as a realisation of transparency itself.
So this was not a simple linear process but involved a constant shuttling back and forth between principles and practices, the more abstract and the more concrete.
In this sense we might say that, rather than a messy set of practices being made to conform to the abstract principle, transparency itself was also transformed by the process (at least in its connotations and associations). Transparency was “narrowed”, coming to mean merely the self-reporting of information. But how did this happen? There were, of course, practical constraints on what one could force companies to do, and there were also cultural shifts in the meaning of words, as noted by Busuioc and co-authors. But, in addition, we show that implementing values means assembling interested parties who inevitably steer what that value can mean. This narrowing need not happen consciously or maliciously but is a function of assembling the network of advocates and users.
How do we evaluate this assembling? What we argue in the article is that we need to consider not just the end results or the consequences but the entire process. Some narrowing is inevitable; not every possible action can be taken in relation to a principle. But we might ask how reversible this narrowing is, and how responsive it is to complaints by outsiders. And rather than judging actions by single principles, we might consider the entire field of possible principles with which to evaluate actions.
This prompts another question: why, if this process is so messy, do we ever start with principles in the first place? Why not start in the middle of the action, at the heart of whatever problem we are trying to solve, and decide which principles are relevant to it? This is something that applied ethicists and scholars in Valuation Studies have been arguing for years. So why are we, in AI Ethics at least, so stuck on abstract principles?
One reason is that working with principles and abstractions is easier: they can effortlessly be shared and moved around. There is no need to get mired in the dirty work of social organizing or policing local practices, because this is often someone else’s job. Another reason, which I will deal with in a subsequent blogpost, is the convenient resonance with abstractions in computing. Principlism seems conducive to the engineering mindset, which desires clean and reproducible solutions to problems. This comes from a desire to make human life more machine-like, to iron out the mess and emotions of organisations. But as our article shows, implementing abstractions is inevitably a messy process, and these complications are ignored at our peril.
***
David Moats is a social scientist, writer and artist based in London. He is currently a Lecturer in Digital Humanities at King’s College London and works on the Reimagine ADM project led by Professor Minna Ruckenstein.