Algorithmic literacy refers to the ability to understand the features of algorithms, how they function and the consequences of their use. It also means the capacity to assess the ethical principles used to determine whether applications built on algorithms are acceptable. The ethics of algorithms, and of smart technical solutions in general, is still taking shape, and its frame of reference remains incomplete. A further element of literacy is the ability to identify and itemise the factors that make a situation morally significant. This may be difficult, since moral significance often stems from the combined effect of many factors.
Politics and ethics
Ethics guidelines for artificial intelligence are recommendations on how AI and algorithms should be developed, used and applied. Such recommendations began to be drawn up when the number of commercial applications based on machine learning and deep learning grew exponentially in the mid-2010s.
The quality of these recommendations varies. Some, such as the UNESCO recommendations still under preparation, are ambitious and in many respects of high quality: they are built on carefully considered, multi-stage, long-term processes with broad engagement of various parties, including specialists in ethics. The worst, by contrast, are nothing but lists of principles whose concepts are left undefined and whose underlying thinking is not described, with no theoretical or experimental research offered as justification. These guidelines are given no context, nor is the choice of specific themes as focus areas explained; the appropriateness of those choices is not considered, and their consequences are not examined.
Indeed, many guidelines have been criticised for being superficial, abstract and conceptually thin. Moreover, their impact has been questioned. Guidelines and recommendations as such have no legal status, and they do not usually contain concrete operating models or measures. Consequently, it is difficult to assess to what extent they have affected the actions of their publishers.
However, many who doubt the guidelines' impact have overlooked what may be their most meaningful consequence of all: they have largely set the framework for almost all discussion of the ethics of AI in recent years. The recommendations have articulated the principles by which the acceptability of AI applications is assessed, and they have selected the concepts employed in discussions of AI ethics.
For example, ethics guidelines are probably the source of the slogan that the goal of AI development should be “human-centric, ethically acceptable and reliable artificial intelligence”. That such guidelines generally do not touch upon ecological issues, or take into account that algorithms also change the living conditions of species other than humans, speaks of a human-centric worldview. Environmental and climate awareness emerged as a clear theme of AI ethics only around 2018, at which point “ethically sustainable AI” was increasingly set as a goal instead of “acceptable AI”. Even so, only the UNESCO guidelines describe the theme of sustainable development in more detail, accompanied by concrete proposals for measures.
The focus on humans is apparent in such guidelines in other ways too, including what is known as the human-in-the-loop principle. Under this principle, artificial intelligence is seen as a supporting intelligence that assists humans, and, especially with regard to accountability, emphasis is laid on humans remaining ultimately responsible for the use of such systems. This presupposes that the human is kept in the loop: aware of the development of AI systems in terms of both information and ethics.
Alongside human-centredness, many other current AI focus areas, such as fairness, discrimination, bias and accountability, echo the emphases and key content of ethics guidelines.
What should be emphasised: risks or goals?
The most interesting feature of ethics guidelines is that they often focus primarily on the threats and risks associated with artificial intelligence. While the goals of the guidelines, such as ‘ethically sustainable AI’, are positive in themselves, the lists of principles discuss technology mainly from the perspective of minimising risks and adverse effects, in line with what is known as the non-maleficence principle.
In other words, the guidelines prohibit, restrict and prevent, but give little thought to how technology could be used to promote positive goals in accordance with what is known as the beneficence principle. For example, guidelines may state that artificial intelligence should not be developed or used in ways that result in discrimination, but no consideration is given to how artificial intelligence could be used to build a non-discriminatory society. In concrete terms, the difference is significant: the range of tools needed to prevent AI solutions from discriminating is entirely different from the range needed to use AI to prevent discrimination.
Technical solutions are produced, developed and utilised within complex structures, and even understanding the questions of accountability is challenging. Who is responsible for technology, and whose activities should regulation target? Who is responsible for discrimination by artificial intelligence – the coder, the product developer or the user? Nor is it entirely clear what such regulation, including a prohibition on algorithmic discrimination, should be aimed at. As face recognition algorithms demonstrate, the same algorithm can be used in both acceptable and unacceptable ways. Consequently, regulation must target the unacceptable uses of an algorithm rather than the algorithm itself. In practice, however, regulating use is not that simple when the technology is still developing and has a wide range of uses and applications.
Also significant is the fact that merely preventing risks and disadvantages does not engender wellbeing or promote other positive societal goals. A focus on risks and disadvantages often narrows our thinking and prevents us from seeing opportunities: by concentrating on prohibition, prevention and restriction, we overlook the possibility of using technical solutions to promote valuable goals.
Versatile algorithms
Algorithms can be used to improve educational equality, support the learning of people with learning difficulties, and develop better technical solutions for education. They can help to promote the rights of minorities, develop methods of civic engagement and safeguard democracy. They can be used to improve data protection and cybersecurity. With the help of algorithms, we can also prevent diseases or the accumulation of social problems. And they can help in the search for solutions to enormous societal problems, such as climate and energy crises, water shortages, poverty or pandemics.
The business associated with artificial intelligence is worth hundreds of billions of euros, and its internal dynamics can be affected by ethics policies to a surprising degree. Ethics is already, at least in rhetoric, a factor in competitiveness – either promoting or hindering it, depending on your point of view. The ethics of artificial intelligence is also linked to many global policy issues, such as the distribution of global prosperity, the polarisation of technological development, the development of human rights and the rules of algorithmic warfare.
In other words, the ethics of AI is no longer only about assessing ethical acceptability; it is also about politics, money and power. The more these are interwoven with the goals of AI development, the more we need to discuss those goals. The lack of analytical exploration of the positive goals of AI and algorithmisation is perhaps the biggest flaw in the current discussion on ethics. More than anything else, we may need well-thought-out and carefully argued views on what our fundamental objectives for algorithmisation are.
This text is an abridged and edited version of the ‘Algoritmien aakkoset’ article originally published in a Think Corner paperback entitled Älykäs huominen.
Anna-Mari Rusanen is a philosopher of science specialising in AI and cognitive research. She works as a university lecturer in cognitive science at the University of Helsinki and as a senior specialist in AI research at the Ministry of Finance. She studies how the information processing carried out by smart systems can be described, as well as the societal and ethical consequences of algorithmisation and AI development.