Following a phase of rapid development, AI applications are increasingly deployed across various fields, including societally sensitive domains such as the judiciary and public administration. At the same time, growing AI deployment has raised awareness of, and concern about, the risks these systems pose in reproducing and amplifying societal biases, among other negative consequences. One solution to mitigate these risks is advocated by governments, international organisations and other stakeholders alike: ethical guidelines. Surprisingly, the global push towards AI ethics is producing an emerging consensus on fairness, transparency and accountability as the focal ethical principles for AI deployment. But what exactly are the AI problems that these policy documents suggest can be solved by such ethical principles? What concrete policy actions do the documents put forward to overcome these risks? And how do these problematisations reflect the insights of socio-legal research on how technology deployment leads to a decrease in discretionary space? In this article, we qualitatively analyse ten policy documents on AI ethics to find out how AI systems are problematised and how justification for policy action is derived from these formulations. We build our analysis on Carol Bacchi's poststructural policy analysis approach, "What's the Problem Represented to Be?", to demonstrate the difference between the explicit and the implicit assumptions found in the policy documents. Ultimately, the fairness discourses of AI ethics demonstrate a disconnect from the procedural fairness and access to justice narratives inherent in the self-reflection of the judiciary.
Aalto HELDIG DH pizza seminar on Friday 28 February 2020 at 12.00 (Metsätalo room 12, Unioninkatu 40)