The 2024 quadrennial joint meeting of the European Association for the Study of Science and Technology (EASST) and the Society for Social Studies of Science (4S) will take place in Amsterdam on 16-19 July 2024, under the theme ‘Making and Doing Transformations’.
Reimagine ADM is organizing a panel that takes an empirically grounded perspective on recent attempts to align algorithmic systems with human values. Drawing on work from Valuation Studies, the panel focuses on how collective or shared values are mobilized and negotiated in relation to these systems, broadly conceived.
The Call for Abstracts will open on 27 November 2023 and close on 12 February 2024. See the full description of the panel below and submit your abstract via this link: https://nomadit.co.uk/conference/easst-4s2024/p/14364
Beyond value alignment: invoking, negotiating and implementing values in algorithmic systems
Recent work in machine learning under the heading of ‘value alignment’ seeks to align autonomous systems with ‘human values’ (Russell 2016). Some of this happens through the mathematical formalization of values like ‘fairness’, while approaches like Inverse Reinforcement Learning (IRL) seek to extract a reward function from human preferences or behaviors. Although values are discussed and operationalized in drastically different ways, they seem central to recent discussions of algorithmic systems.
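As a concrete illustration of what such mathematical formalization can look like, the sketch below operationalizes ‘fairness’ as demographic parity, i.e. equal rates of positive predictions across two groups. The function and toy data are hypothetical, included only to show how a value becomes a computable metric, not to represent any particular system discussed in the panel.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups.

    y_pred: binary predictions (0/1) from some classifier
    group:  binary group-membership indicator (0/1)
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive rate in group 0
    rate_b = y_pred[group == 1].mean()  # positive rate in group 1
    return abs(rate_a - rate_b)

# Hypothetical example: predictions for eight applicants, split across two groups.
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(y_pred, group))  # 0.5
```

Reducing ‘fairness’ to a single number of this kind is precisely the sort of translation the panel invites contributors to examine empirically.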
How do these understandings of values, drawing on cognitive psychology and economics, correspond to anthropological (Graeber 2001) or sociological (Skeggs and Loveday 2012) theories of value, or indeed to empirical approaches like valuation studies (Helgesson and Muniesa 2013), which see values not as a driver of action but as an upshot of practices? How can individual-level data or preferences be reconciled with more complex collective, shared values? How can we agree on which values to prioritize, or on how to implement them in practice?
This panel takes an empirically grounded perspective on values in algorithmic systems, broadly conceived. We will explore how (collective) values are invoked, negotiated, and used to settle disputes in this context, and examine attempts to invest algorithmic systems with specific values. We invite contributions including, but not limited to, the following:
- Ethnographic and other studies of attempts to translate values into machine learning systems and Automated Decision Making (ADM) in different domains.
- Investigations of how such machine learning systems are confronted with value-laden practices (for example, professional ones) on the ground.
- Empirical analyses of discourses or debates around values in value alignment, AI safety or Fair ML, including divergent interpretations of concepts such as ‘fairness’ and ‘bias.’
- Accounts of disagreements between academic disciplines or professional domains over the meaning of values.
- Critical and reflexive descriptions of interventions (Zuiderent-Jerak 2015) in this space, including attempts to measure or model values computationally.