In late 2020, Professor Teemu Roos participated in NeurIPS, the most esteemed conference in his field and a gathering of top artificial intelligence researchers from around the world.
For the first time, the participants were required to evaluate the broader social impact of the applications they were going to present at the conference. This proved controversial. Some researchers viewed the requirement as censorship or as needless politicisation of science.
“Bridges were burned. I cannot remember ever having such passionate conversations about topics in my field before. The issue was deeply sensitive,” Roos says.
Roos strongly believes that researchers and engineers cannot wash their hands of the impacts of their inventions. Today, nearly everyone is in contact with artificial intelligence, or AI.
We encounter artificial intelligence everywhere. Online stores, streaming services, recommendation algorithms, search engines, social media and online dating applications all use it. Even if you try to opt out of algorithms, your data will still be collected and processed by the artificial intelligence systems used in public administration.
“We have seen so many adverse effects from AI applications, both intended and unintended. Algorithms have recommended lower-paying jobs for women, and facial recognition software has failed to recognise people of colour. These kinds of phenomena mean that we have to consider the broader context,” says Roos.
Provocative and addictive
Artificial intelligence can make our lives easier, help us work more efficiently and support us in decision-making. It is able to select precisely the content that we need from the overwhelming abundance and chaos of big data. It is automated data processing in its truest form.
However, many people do not realise that whenever an algorithm recommends something, it is also excluding something else. It makes decisions based on its code and the data on which it has been trained, not according to some objective law of nature.
In the worst-case scenario, algorithms can encourage political extremism and lead to the polarisation of societies. Not because anyone intended it, but because the goal of all social media platforms is to maximise the amount of time we spend on them. As we are often drawn to provocative content, social media algorithms offer us precisely that.
Sometimes the addictive content may be misleading or false, or even sheer hate speech. The persecution of the Rohingya minority in Myanmar escalated into genocide because of fake news spread on Facebook. The 2016 presidential election in the United States was influenced by voter manipulation conducted via Facebook.
According to Roos, the transformation of how we seek information and the bubbles generated by social media are among the most profound impacts of artificial intelligence on society.
“If an algorithm promotes extremist content and false information, it has a tremendous impact. We need to learn to control recommendation mechanisms and mitigate their adverse effects.”
Unfortunately, such mitigation is currently not profitable for social media platforms.
Cats and mice
Although it would be beneficial to consume content that challenges our views, our minds operate differently. The human brain cannot process all the alternatives, which makes us filter out the information that does not support our thinking. This is the foundation of social media algorithms.
“Social media algorithms boost the basic features of our own information processing systems. People are lazy when it comes to seeking out opinions they disagree with,” says Anna-Mari Rusanen, a university lecturer in cognitive science.
Even animals become addicted to content that they enjoy. Rusanen tested this on her cat, which now loves to play mouse-catching games on YouTube all day. The cat waits patiently in front of the TV until Rusanen starts the game, then swipes at the mice with its paws, trying to get inside the screen.
“YouTube constantly presents us with content that we enjoy, which appeals to the weakest links in the human cognitive system. The mechanism is surprisingly effective as it even works on animals. Do parents understand that the only objective of YouTube Kids is to have their children spend as much time on the platform as possible?”
Only some decades ago, almost all Finns turned to the national broadcasting service Yle for their news. Those who wished to base their worldview on the newspaper of their chosen political party had to make a deliberate choice to do so.
“Now we are no longer able to choose, as social media just blasts us with content that it considers appealing to us. We are drifting in a stream of information that we do not control. If we do not use any other sources of information, constantly consuming similar material will begin to shape the way we think,” says Rusanen.
10,000 people or a single bot?
On social media, people often write things that they would never say out loud. When people who share similar beliefs are brought together, they can egg each other on. In their bubble, they may think that the whole world agrees with them.
“Twenty years ago, we believed naïvely that the internet would connect people and allow us to have conversations with anyone in the world. Instead, we have wound up in cliques,” laments Petri Myllymäki, Professor of Computer Science.
Some of the participants in online debates are bots – pieces of software designed to promote a specific issue. They pose as people and may push their views aggressively.
“It is not fair to allow people to think that there are 10,000 people who agree with them if in reality there is only one. At the moment there is no regulation on bots,” Myllymäki says.
Myllymäki thinks that the Wild West of the internet needs a sheriff. The digital environment is a part of society, and the same values must be respected both offline and online.
“If someone does something wrong, it is wrong regardless of the technology they use. We have laws and values that determine that no one should be discriminated against and that personal privacy must be protected. In the physical world, we take these things for granted.”
Tech giants hoard data
Social media platforms and search engines often require that users share some of their personal information. This data is then passed on from one service to the next. If you search for a washing machine on Google, ads for them will soon be popping up on your Facebook feed. This might be convenient, or annoying. In Myllymäki’s opinion, the services should at least ask people whether they consent to their data being shared with other parties.
Most people consider Google synonymous with the internet. However, Google is a private corporation that can remove other companies or even nation-states from its service if it wishes to do so.
Technology giants have power and money. Some of them offer governments public-sector AI applications for free if the governments in turn release data on their citizens. Myllymäki thinks this is a bad idea – he would rather see public administration develop its own artificial intelligence applications.
“We have already handed over most of our social lives to technology giants, but they should be kept away from societal data, such as information relating to health, education and transportation. We need to stay alert and not repeat the same mistake,” Myllymäki says.
We are used to thinking that state-of-the-art AI applications demand substantial amounts of data, money and electricity, and the environmental impact of such intense energy consumption is well known. According to Myllymäki, this does not have to be the case: it is possible to build artificial intelligence systems for companies and the public sector using less data and money.
“Efficiency is well aligned with the Finnish mentality. For decades, Finnish researchers have been studying artificial intelligence solutions that would use smaller amounts of data. We are pioneers in this field.”
The algorithm and the lady next door
Artificial intelligence is already in use in public administration. Increasing amounts of data are constantly being collected on us and used for making decisions on, for example, taxation and social benefits.
Some health care districts are collecting social and health data to predict risk. A pilot project in the South Karelia Social and Health Care District used data to evaluate whether families required child welfare or mental health services.
“If health registries can be used to predict risk, it benefits both the individual and the health care provider. But how much access to people’s private lives should the government have, even with the best of intentions?” Anna-Mari Rusanen asks.
Artificial intelligence brings up the age-old questions of autonomy and the relationship between the government and citizens. How does the government’s supervisory role impact individual freedom? On the other hand, can the government be accused of negligence if it fails to intervene despite having access to data that points to a problem?
“One of the core questions is whether it makes a difference if the need for child welfare services is evaluated by data analytics instead of the lady next door,” Rusanen says.
According to Rusanen, we are living at a time when new technology is forcing us to consider why we do the things we do. Our very social system has to find justifications for itself.
The European way of thinking has traditionally maintained that people have the right to privacy. This concept goes back as far as the 17th century. It reflects the prevailing individualism as well as the idea of fundamental rights that a government cannot infringe upon.
Abuses are difficult to prevent
Applications designed with the best of intentions can also be abused. For example, image recognition is improving the automatic braking systems in cars, but it can also be used for ethnic profiling.
The software designer cannot choose how the application will be used. In the artificial intelligence business, knowledge itself is the technology. It is possible to regulate the technology used in nuclear weapons, as the associated hardware is large, but it is difficult to prevent bits of code from slipping across national borders.
Should the development of certain applications be banned on the basis that there are countries where they could be used to commit human rights violations? Petri Myllymäki does not think so. Such a ban would not prevent the development of similar applications elsewhere.
“Regulation often takes the form of bans, but bans also wind up preventing beneficial uses of the software. I would prefer rules that still enable innovation. The EU has a reputation for banning everything, which lets China and the US happily take the lead instead,” Myllymäki says.
In contrast, Teemu Roos thinks that the EU can be credited with many positive achievements. He believes that the 2016 General Data Protection Regulation has worked surprisingly well.
“The EU has power. It is also the only entity genuinely interested in the topic,” Roos says.
Companies will operate according to the mechanisms of the market until politicians and citizens set the limits. As the field is highly international, the legislation cannot be limited to the national level.
“Companies regulating themselves is not enough. We have known this for years. Environmental legislation would not have advanced at all if we had left it to the big oil and gas companies,” Roos points out.
EU debates the terms
Last spring, the European Commission released its proposal for a new artificial intelligence strategy. The ambitious legislative package proposes a ban on AI-led mass surveillance while setting conditions for AI applications dealing with, among other things, education, recruitment, public services, and law enforcement.
“It is good that a framework is being set for artificial intelligence that respects human rights. However, the package has been criticised for being too heavy and complicated. Is the EU using a sledgehammer to do a job that calls for a screwdriver?” Anna-Mari Rusanen asks.
In addition to working as a university lecturer, Rusanen serves as a senior specialist at the Ministry of Finance in the department responsible for the digitalisation of public administration. She assists in drafting legislation and interpreting EU regulations.
According to Rusanen, the Commission’s proposal is appropriate in its goal to regulate the use of AI rather than the underlying technologies. However, the definition of artificial intelligence remains problematic: the list of AI systems in the proposal is narrow and incomplete.
The EU is expected to release the second part of the legislative package, which focuses on liability for products and damages, later this autumn. Nevertheless, it will take years before the rules can take effect.
“This is a slow process which will evoke considerable debate as well as intense emotions, lobbying and activism,” Rusanen says.
Free will at stake?
Artificial intelligence researcher Timo Honkela (1962–2020) developed the concept of the Peace Machine, an artificial intelligence solution that would help resolve conflicts. Artificial intelligence can also be used by doctors to help identify tumours, or by engineers to plan road repairs. It can significantly improve our decision-making.
However, most of us still want the final decision to be made by a human. But is a decision really human-made just because a human signs off on it?
When artificial intelligence proposes a solution, humans will often agree. According to a report by AlgorithmWatch, unemployment officials in Poland changed the decision proposed by the AI in less than one per cent of all cases. This can mean either that the algorithm is superb or that humans are lazy.
Our decisions are becoming less and less independent. This may be impairing our ability to make decisions. Typically, any skills we do not exercise begin to deteriorate. If you always use a map application to navigate, you may not be able to find your way without it.
So, is artificial intelligence our master or servant? A useful helper or a sly manipulator? The key is whether we use artificial intelligence in a controlled manner with the understanding of its inner workings, or whether we let it restrict our freedom of choice and free will.
Instead of the threats and risks, Anna-Mari Rusanen prefers to focus on the good that could be done for the human race and the whole planet with the help of artificial intelligence. The direction of the development is in our hands. The ethics of AI, a course coordinated by Rusanen, ends with the hopeful quote from Aristotle: “Choice, not chance, determines your destiny.”
If you want to be the master of artificial intelligence, learn to understand it. Then you may be in a position to choose where it should and should not be used.
The article has been published in Finnish in the 6/2021 issue of the Yliopisto magazine.
There are two types of artificial intelligence. Traditional algorithms are explicitly programmed: they are given all the possible options and rules for which one to choose in each case. A machine learning algorithm, by contrast, absorbs huge datasets and learns on its own. It condenses the given data into rules and compares new cases to old ones, trying to make good decisions based on previous knowledge.
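The difference between the two types can be sketched in a few lines of code. This is a toy illustration, not any real system: the spam scenario, the data and the threshold-picking procedure are all invented for the example. The rule-based filter contains a decision a human wrote into the code; the learning version derives the same kind of threshold from labelled examples instead.

```python
# 1) Traditional, rule-based algorithm: a human wrote the decision rule.
def rule_based_filter(num_links: int) -> bool:
    # The threshold 3 was chosen by a programmer and hard-coded.
    return num_links > 3

# 2) Machine learning (minimal sketch): derive the threshold from data.
def learn_threshold(examples):
    """examples: list of (num_links, is_spam) pairs.

    Try each observed value as a threshold and keep the one that
    classifies the most training examples correctly."""
    best_t, best_acc = 0, -1.0
    for t in sorted({n for n, _ in examples}):
        acc = sum((n > t) == spam for n, spam in examples) / len(examples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Invented training data: messages with many links were labelled spam.
training_data = [(0, False), (1, False), (2, False),
                 (5, True), (7, True), (9, True)]
learned_t = learn_threshold(training_data)

def learned_filter(num_links: int) -> bool:
    # The decision rule now comes from the data, not from a programmer.
    return num_links > learned_t
```

Note that the learned rule is only as good as its training data: feed the learner different examples and it will quietly settle on a different threshold, which is exactly why biased data leads to biased decisions.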
It was long thought that machine learning is neutral, as it is unaffected by the mood and opinions of the person making the decisions. However, while the algorithm itself cannot discriminate, its decisions may do so, if historical, biased data has been used to train it.
“An algorithm cannot pick out the moral norms from data, or correct existing biases. We have to teach it the rules of our society separately,” says Associate Professor Indrė Žliobaitė, who created the world’s first university course on fair machine learning.
The guiding principle when training the algorithm is to tell it in unambiguous terms what is expected of it. Even though it can learn on its own, the learning process can be guided by setting more precise conditions. The impact of different variables on the outcome can be tested and the variables optimised to reach the desired results.
It may be impossible to clean data of bias if it reflects centuries of cultural discrimination. However, the algorithm can be told that there is an equal number of women and men in the world, even if men might be the majority in historical recruitment data.
The processes of machine learning can be certified just like any other work processes. Žliobaitė is a proponent of setting standards, like those of ISO, for socially sensitive applications to ensure their quality.
“Algorithms are a part of our culture now, just like fire and tools. They change how we learn, just like planes changed the way we travel,” Žliobaitė muses.