Can artificial intelligence be sued? Who is responsible when AI makes the decisions?

In the Middle Ages, even animals could be brought to justice. Will artificial intelligence be held responsible for its doings in the future?

A Finnish-speaking man living in a rural area was denied credit when he tried to buy building supplies. The reason given? He was a Finnish-speaking man living in a rural area.

The decision was based on a statistical points system, which the Anti-Discrimination Tribunal of Finland ruled discriminatory. However, similar systems are in use elsewhere. If data processing is left to artificial intelligence, the scoring criteria might be simply unintelligible.

In recent years, few things have generated as much anxiety as AI. It arouses both hopes and fears. AI is already here, and many questions regarding its use have yet to be answered.

The Finnish National Artificial Intelligence Programme claims that Finland is striving to become a leader in the application of AI. Algorithms are already being used in the EU in a variety of ways to assist in decision-making, according to a report published in January 2019 by the NGO AlgorithmWatch.

It can affect anyone

Minna Ruckenstein, assistant professor at the University of Helsinki Centre for Consumer Society Research, was a member of the team responsible for Finland’s contribution to the report. In her opinion, as citizens we’re on shaky ground ethically, politically and legally when it comes to issues linked to artificial intelligence.

The report also discussed the case of the man who had applied for credit to buy building supplies. According to Ruckenstein, it is a prime example of a situation where AI makes an evaluation based on statistical, rather than personal, data.

The credit company found no information about the man in its own database or in the credit information register when he applied for credit for his online purchase. The credit scoring system therefore defaulted to an evaluation based on his age, gender, place of residence and native language.
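In code, such a fallback scorer is little more than a table of group-level points and an approval threshold. The sketch below is purely illustrative: the attributes mirror the case, but every point value, and the threshold itself, is invented here.

```python
# Hypothetical sketch of a statistical fallback scorer: with no personal
# credit history available, the applicant is scored on group-level
# attributes alone. All point values and the threshold are invented.

GROUP_POINTS = {
    "age_band":        {"18-25": 5, "26-40": 15, "41-65": 20, "65+": 10},
    "gender":          {"female": 15, "male": 5},
    "area":            {"urban": 15, "rural": 5},
    "native_language": {"swedish": 15, "finnish": 5},
}

APPROVAL_THRESHOLD = 40

def score_without_history(applicant: dict) -> bool:
    """Return True if the applicant clears the threshold on group data alone."""
    total = sum(GROUP_POINTS[attr][applicant[attr]] for attr in GROUP_POINTS)
    return total >= APPROVAL_THRESHOLD

# A Finnish-speaking man in a rural area falls short regardless of his record:
applicant = {"age_band": "26-40", "gender": "male",
             "area": "rural", "native_language": "finnish"}
print(score_without_history(applicant))  # False (30 points, below 40)
```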

“This case shows that anyone can become subject to discrimination. However, it typically affects the disadvantaged in society,” Ruckenstein points out.

AI and power

Over the last four years, the Finnish government has passed dozens of new regulations to facilitate the digitalization of public administration. In the near future, AI will be used to assess whether a person is in danger of social exclusion based on their health records. There are also plans to implement AI in employment administration, immigration services and electricity utility billing.

“The intention is good, but at the same time AI is providing tools for the totalitarian use of power. In Finland, people believe that technology is a stairway to heaven, and the discussion has been too narrow in its focus,” says Tuomas Pöysti, the Finnish Chancellor of Justice, speaking at a “Justice and Digitalization” event organized by the Finnish Lawyers’ Association at the beginning of 2019.

At present, algorithms do not make decisions autonomously: their suggestions must be signed off by a human. However, the person held responsible might not have the time or skills to familiarize themselves with the matter at hand. For example, according to the AlgorithmWatch report, employment administration officials in Poland revised decisions made by algorithms in fewer than one per cent of cases.

Do people really understand?

In 2018, Maija Sakslin, the Finnish deputy ombudsman responsible for taxation issues, demanded an investigation into the Tax Administration’s automated tax decisions, as she suspected them of compromising good administrative practices and the right to due process. Tax counsellors answering customer service telephone calls were unable to explain decisions made by software robots.

Rule-based algorithms work by coding expert knowledge into the program. All the possible scenarios are entered into the system, and the computer program is taught how to react to each of them. This type of AI has been in use since the 1980s, and the logic of its decisions is clear and easy to trace.
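In code, a rule-based system amounts to explicit, human-written conditions, so any outcome can be traced back to the rule that produced it. A minimal sketch, with rules invented for illustration:

```python
# Illustrative rule-based decision logic: every rule is written by a human
# expert, so the reason for any outcome can be read straight off the code.

def decide_credit(income: float, existing_debt: float, payment_defaults: int):
    """Apply hand-written expert rules and report which rule decided."""
    if payment_defaults > 0:
        return False, "rule 1: applicant has recorded payment defaults"
    if existing_debt > 0.5 * income:
        return False, "rule 2: debt exceeds half of annual income"
    return True, "rule 3: no rule blocks granting credit"

approved, reason = decide_credit(income=40_000, existing_debt=5_000, payment_defaults=0)
print(approved, "-", reason)  # True - rule 3: no rule blocks granting credit
```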

Today, however, newer machine-learning AI teaches itself based on massive amounts of data. It derives rules from the data, and compares new cases to old ones, thus trying to make the best decisions based on previous knowledge. Its logic might be hard to unravel in a way that makes sense to the individual.
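The contrast with the rule-based approach is easy to show in a toy example. In the sketch below (synthetic data, using the scikit-learn library), no rules are written by hand; the model infers them from past cases, and what it learns is a set of numeric weights rather than anything resembling a human-readable rule:

```python
# Toy machine-learning counterpart to the rule-based sketch above.
# The model infers its "rules" from past cases, and the learned weights,
# not any readable rule, decide new applications. All data is synthetic.
from sklearn.linear_model import LogisticRegression

# Past cases: [income in kEUR, existing debt in kEUR] -> loan repaid (1) or not (0)
X_past = [[20, 15], [60, 10], [35, 30], [80, 5], [25, 20], [90, 2]]
y_past = [0, 1, 0, 1, 0, 1]

model = LogisticRegression().fit(X_past, y_past)

# A new case is compared to the old ones through the learned weights;
# the "rule" being applied is just a vector of coefficients.
print(model.predict([[40, 12]]))   # e.g. [1]
print(model.coef_)                 # the opaque learned "rule"
```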

Avoiding responsibility

Raul Hakli, a research fellow in the Department of Philosophy at the University of Helsinki, thinks the risk is that we will try to avoid responsibility by hiding behind AI: “We hide behind claims like ‘Oh, too bad, that’s what the computer decided.’”

Hakli and his colleague Pekka Mäkelä have been considering whether AI should be held accountable for its decisions. In an academic article published in The Monist, they reach a firm conclusion: under no circumstances should software robots be held responsible for their actions, no matter how they might evolve in the future.

Moral responsibility implies a measure of control over one’s actions. This is something AI lacks: it has been programmed to fulfil goals set by its programmer, goals which are not in fact its own. Furthermore, AI does not respond to disapproval, praise or encouragement.

Humans are slow and stupid

Responsibility for any blunders committed by AI lies with the programmer if they have made a mistake, or with the user if they have misused the program. It can also lie with the society that has approved the use of such systems. And sometimes nobody is to blame; it is simply an accident. Cases like these call for indemnity insurance.

Hakli, a former computer science student who has since moved into philosophy, understands well the desire to increase the use of AI. It is easy and cheap, effective and productive, and is not subject to human error.

“We humans are slow, lazy and stupid. But as with the adoption of any new technology, we still need to consider whether it will make the world a better place.”

Deterrents could work

Even if AI cannot bear moral responsibility, could an action be brought against it in a court of law? If the answer were ‘yes’ and it were found guilty, how would it be punished?

Visa Kurki, a postdoctoral researcher in law at the University of Helsinki’s Collegium for Advanced Studies, considers the conundrum: “Traditionally, punishment has been seen as a form of atonement or a deterrent. Atonement means nothing to AI, but deterrents just might work.”

A stock trading algorithm will use any means necessary to achieve the goal of generating maximum profit. If the legislator wants to prevent the algorithm from utilizing inside information, they should impose a fine that is a multiple of the profit gained.
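The logic behind such a multiple is simple expected-value arithmetic: if only some violations are ever detected, the fine must be scaled up in proportion for the violation to be unprofitable on average. The figures below are invented for illustration:

```python
# Invented figures: why the fine must be a multiple of the profit gained.
profit_per_violation = 1_000_000   # EUR gained from one insider trade
detection_probability = 0.25       # suppose only a quarter of violations are caught

# For deterrence, the expected fine must exceed the profit:
#     detection_probability * fine > profit_per_violation
minimum_fine = profit_per_violation / detection_probability
print(minimum_fine)  # 4000000.0 EUR, i.e. four times the profit gained
```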

Nowadays, trading algorithms are basic tools in the banking industry, which means responsibility for algorithm-initiated trades lies with the banks. On some occasions, deals have even been cancelled when an algorithm suddenly appeared to be behaving strangely. In the United States, however, there has been speculation as to whether an algorithm could be spun off from a bank into an independent company, which could be declared bankrupt if necessary. That would leave nobody responsible for the algorithm’s decisions.

The Rights of Nature

Visa Kurki would like to change the way we define the concept of a legal person. Currently, Finnish law recognizes only natural persons, i.e. human beings, and legal persons, such as corporations or associations.

"It is often thought that only natural persons and legal persons have rights. In reality it’s far from simple."

Our view of rights, and of who or what is entitled to them, has altered over the years. It was only gradually that women and slaves secured their rights, and children are heard in court far more today than a few decades ago. Animals have the right to be well treated. In New Zealand, even a river was granted legal personhood.

Mice on trial

Things have changed over time, including our sense of who can be charged with a crime. The Middle Ages saw some peculiar animal trials. In France, mice were put on trial for eating grain. They even had a defence attorney, who had to answer the judge’s inquiries as to why the defendants were absent.

Kurki divides legal persons into active and passive. Those in the passive category have rights but no duties; those in the active category have both, and can also act in court.

Children and animals cannot stand trial because they cannot be presumed to understand the consequences of their actions. Sentient beings need to be protected from punishment they do not deserve.

AI on the other hand does not experience suffering. That is why Kurki would not give AI constitutional rights. Instead, it could be thought of as a responsible actor under certain circumstances.

Estonia goes its own way

Beata Mäihäniemi, a legal scholar, has investigated how various countries legislate for AI. Most countries aim for technological neutrality. However, Estonia has chosen a different path.

“Estonia is drafting a special law which would grant AI distinct legal status. That means rights and accountability,” says Mäihäniemi.

A more typical response, however, is to regulate the use of AI by amending administrative law. At the EU level, light-touch regulation and ethical guidelines for exploiting AI are being developed, along with industry guidelines for product liability.

Mäihäniemi would not leave any decision involving deliberation to AI. Predictive algorithms, moreover, can clearly invade people’s privacy.

In the municipality of Espoo, near Helsinki, algorithms were used to predict the likelihood of a child needing child protection services. In Mäihäniemi’s opinion this raises ethical issues, even if people were to consent to its use and benefit from it.

Discriminatory AI

The principles of the EU’s General Data Protection Regulation serve as rules governing the use of data. On the other hand, data protection is of no use if you consent to handing over your data whenever you look for a job or an apartment.

Administrative decisions are about exercising official authority, and as such should be strictly regulated. In the private sector, these rules are more loosely observed. Some corporations use AI in a discriminatory way when issuing insurance and granting loans.

“Frankly, it is a bit frightening,” Mäihäniemi says.

In her opinion, the law should compel companies to reveal, at least upon request, what data has been used and how it has been weighted.

Dirty data

AI based on machine learning raises concerns about the quality of its data. Because it learns from the outcomes of previous cases, it can also replicate any errors the human decision-makers behind those cases made.

“The danger is that the distortions get coded into the systems and get ‘baked’ into the algorithms, and people will no longer be conscious of them,” says Raul Hakli.

A good example of this is COMPAS, an AI system used in the United States to assess the risk of recidivism, which determines whether the accused can be released on bail pending trial. The system sent black people to prison more often than white people, although it was not permitted to use skin colour as a basis for its decisions.

“The data should be purged of discriminatory elements, but it is difficult, because those elements could correlate with other variables,” Hakli says.
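This difficulty is often called proxy discrimination: deleting a protected column from the data does not help if another column correlates with it. A toy illustration with synthetic data (scikit-learn again), in which postcode stands in for the removed protected attribute:

```python
# Toy illustration of proxy discrimination with synthetic data: the
# protected attribute has been removed, yet the model recovers the same
# bias through a correlated column (here, postcode).
from sklearn.linear_model import LogisticRegression

# Historical decisions were biased against one group, and that group
# happens to live almost entirely in postcode 2.
#       [income_kEUR, postcode]
X = [[50, 1], [55, 1], [60, 1], [50, 2], [55, 2], [60, 2]]
y = [1, 1, 1, 0, 0, 0]   # biased past outcomes: postcode 2 always denied

model = LogisticRegression().fit(X, y)

# Same income, different postcode: the bias survives the removal of the
# protected column because postcode acts as its proxy.
print(model.predict([[55, 1], [55, 2]]))  # likely [1 0]
```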

France’s constitutional council, Le Conseil constitutionnel, set out from the premise that the decisions of an algorithm must be decodable by, and understandable to, a human, failing which the use of algorithms is prohibited. In the case of AI that uses machine learning, achieving this decodability is difficult.

“The problem could be solved by programming the algorithm to determine which variables should have been different for the decision to have been different,” Visa Kurki says.
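What Kurki describes is known in the research literature as a counterfactual explanation. A crude sketch of the idea, with a stand-in decision function and invented features: perturb one input at a time and report the smallest change that would have flipped the decision.

```python
# Crude counterfactual search: try changing one feature at a time and
# report the smallest change that flips the decision. The scoring
# function here is a stand-in for whatever model made the decision.

def decide(features: dict) -> bool:
    """Stand-in decision function: approve if a weighted score reaches 50."""
    score = 0.5 * features["income_kEUR"] + 10 * features["years_employed"]
    return score >= 50

def counterfactuals(features: dict, step: float = 1.0, max_steps: int = 100):
    """For each feature, find the smallest increase that flips the outcome."""
    original = decide(features)
    for name in features:
        for n in range(1, max_steps + 1):
            changed = {**features, name: features[name] + n * step}
            if decide(changed) != original:
                yield name, n * step
                break

applicant = {"income_kEUR": 40, "years_employed": 2}
print(decide(applicant))              # False: the score is 40, below 50
print(list(counterfactuals(applicant)))
# -> [('income_kEUR', 20.0), ('years_employed', 1.0)]
```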

Human rights must not be infringed

Goals are at the core of machine learning. A system can come up with surprising methods to reach its goal.

“We must think carefully about what is being developed, what we grant permission to use, and what kind of world we want to live in,” Hakli says.

The increase in automation affects everyday life in many ways. Skills we do not use will atrophy. This shows in our memory, navigation skills and social skills. We are dependent on technology.

The basis of the relationship between humans and AI should be the inviolability of human dignity.

“Human liberties and our place in the legal system must be protected,” Tuomas Pöysti adds.

This should not be beyond the realm of possibility. Ultimately, responsibility for how we use AI lies with us.

The article has been published in Finnish in the 3/2019 issue of Yliopisto magazine. The original Finnish text was translated by undergraduates of the Department of Languages, English philology, under the supervision of John Calton and Nely Keinänen.