The rush to implement artificial intelligence tools and algorithmic systems has sparked a critical conversation about their threat to public values, such as privacy, solidarity, autonomy, equality, and trust. As technological advancements take shape, there is growing concern that core public values are being overlooked and replaced by narrower, technocentric approaches. Such a narrowing of the ways technologies are imagined and deployed leads to prioritizing efficiency, surveillance, and consumer choice over openness and trust, reducing values to matters of individual preference. Rather than simply critiquing values like efficiency, however, our project views the role of technology in society as embedded in various value aims. Technology mediates values rather than exhibits them.
A focus on values as they emerge
Our project is a collaborative effort by an international consortium of researchers, pooling diverse expertise in algorithmic systems to study how values are articulated and put into action in automation-related developments. Drawing on empirical material from Finland, Sweden, Denmark, Belgium, Slovenia, the UK, and the EU, we advocate a perspective that reaches beyond technical and binary ways of approaching values. Efficiency, for instance, is not inherently detrimental to public values, but its impact in relation to other values needs to be examined. We are exploring how values are negotiated, balanced, compromised, and aligned within constellations of people and algorithmic systems, and in relation to societal concerns such as safety and the climate crisis.
We call for a more grounded understanding of the role values play: not as static entities to be steered or programmed, but as dynamic constructs and aims that evolve through interactions between humans and machines over time. For instance, Sarah Pink and colleagues (2024) offer an insightful way to consider this, challenging the notion of trust as a transactional element in human-technology interactions. They propose seeing trust instead as “a feel” – an anticipation of future events. This viewpoint, enriched by ethnographic studies, indicates that placing trust in technology, individuals, or organizations does not guarantee their trustworthiness. AI systems cannot be intrinsically labeled as trustworthy or ethical, because trust and ethics are context-dependent characteristics rather than fixed qualities or tangible assets that can be obtained or owned.
Pink and colleagues envision trust as a process, or a unifying element, bridging various disciplines, stakeholders, practitioners, and technologies, thereby facilitating a more comprehensive exploration of trust within the realm of artificial intelligence. This perspective raises complex questions concerning the role of trust in the design and development of algorithmic systems, as well as the ethical implications for AI and the training of its developers. Yet it is precisely these more challenging questions that should concern those genuinely invested in the concept of trustworthy AI.
Value clashes and patterned responses
In our project, we are currently developing conceptual frameworks that allow us to highlight the importance of the people behind the algorithms – their intentions, practices, and the decisions they make when building, promoting, and evaluating algorithmic systems. Perle Møhl’s (2019; 2020) work on the implementation of facial recognition in border control is exemplary in this regard, as it demonstrates how automated systems were promoted as a response to the so-called migration crisis and the threats related to illegal border crossing. The values activated by these technological solutions had to do with security and safety, and with trust in facial recognition as efficient and tamperproof. The research showed, however, that the automated systems were no more efficient than manual border guarding, and that they clashed with a wide range of other civic values, such as privacy, human safety, and solidarity, with wider geopolitical ramifications. Automated technologies were also in tension with human skills and professional identities, raising questions about whose expertise counts. Our shared goal is to identify patterns of adaptation, resistance, and renewal as people reform and protect their work processes and identities under the pressure and excitement of adopting algorithmic systems.
Vocabularies for future visions
We explore the technical features and material contexts that define algorithmic systems from a variety of perspectives. The massive energy demands of AI infrastructures will influence how we imagine algorithmic futures. Julia Velkova (2024) demonstrates in her work in Sweden how tech companies that connect data centers to electricity transmission grids not only augment computational resources but also divert grid capacity from other sectors that compete for grid power. In practice, this means that in some regions the electricity allotted to data centers hits the limits of power grids, leading to what we call infrastructural gentrification: the pushing of local industries and other actors elsewhere. From this perspective, energy demand is not only an environmental issue but a public value issue concerning equal access to public resources – in this case, electricity – that we in the Nordics and Europe have come to take for granted. Given that data centers are also large users of communal water and land, infrastructural gentrification extends to questions of resource governance and underscores how certain public values are privileged at the expense of others.
This example from Sweden articulates our further aim: to develop alternative vocabularies, ranging from metaphors and concepts to locally resonant notions of infrastructures, everyday practices, and expressed sentiments about technologies. While many see AI development as an inevitable destiny, we show how massive computation will require all of us to consider infrastructural limits that cannot be stretched with just a click of a mouse. We are interested in concepts that are grounded in the specificity of empirical cases and practices but can also be used to discuss more generally observable developments. By finding empirically robust and imaginative ways to address values, we promote societally sensitive understandings of algorithmic systems, underlining the importance of shared vocabularies for articulating future visions.
Collaborative ways forward
Empirical investigations into how values are expressed throughout the lifecycle of algorithmic systems – from pilot stages to human-machine interactions and societal impacts – can uncover how values clash and are balanced by means of algorithmic systems and in relation to them. For instance, looking back at our past ethnographic cases, we examine threshold values, automated tags, and triggers within algorithmic systems to identify manifestations of values, their alignments, and their conflicts. Methodologically, we lean towards ethnographic, participatory, and even action-oriented research, thinking together within our research group and with our stakeholders about how algorithmic systems tend to promote different value aims. By studying and revisiting empirical cases with a focus on public values, and by employing mixed-method, cross-sectoral approaches, we strive to establish new routes and openings in the social scientific study of algorithmic systems. This is an ambitious undertaking, and we recognize that it cannot be accomplished alone. We are collaborating with a large group of researchers, industry professionals, and civil servants, and hope that our research will resonate with other scholarly efforts in this realm and inspire further cooperation.
In the coming months, we will be reporting on our work through blog posts. Stay tuned!
On behalf of the Chanse project team,
Minna Ruckenstein
Project Leader
References:
Velkova J (2024) Dismantling Public Values One Data Center at a Time. NordMedia Network.