“As artificial intelligence advances, humans behaving like machines will be a bigger problem than machines being human”

Robots have already replaced humans in some jobs. According to a researcher, the development of artificial intelligence will change the way we understand humanity. For example, our capacity for empathy and love may decline.

What do robots have to do with how we think about ageing?

This is the kind of question that has been on the mind of Docent, Academy Research Fellow Aku Visala lately. In his latest research, Visala has focused on the ways the development of robotics and artificial intelligence will change our understanding of humanity. He specialises in questions regarding the evolution of consciousness and morality.

As artificial intelligence develops, Visala believes a decline in human interaction may become a problem. This is particularly the case if we view interaction mainly from the perspective of how technology could be used to mechanically replace humans in a variety of duties.

Visala points to elderly care as an example. Deciding what kinds of social resources to invest in the respectful care of the elderly is a moral and political issue.

 “Instead of considering the needs of the elderly holistically, we may choose to house them in facilities where they are cared for by robots instead of humans.”

 “Is it acceptable for a robot caretaker to use facial recognition to identify whether the patient is sad or angry and then respond with a sigh or statement that has the appearance of empathy? This means we are offering a simulation of care and empathy, not the real thing,” says Visala.

Visala says this is not an apocalyptic dystopia, but a genuine worry about how the limitations of artificial intelligence change our moral concept of humanity and our values.

 “Do we want duties requiring interaction to be transferred from humans to things that react but cannot care or feel responsibility?”

Technology is not morally neutral

Visala also finds it problematic to use technological solutions for jobs that require moral effort: if the elderly are cared for by robots, their children don’t have to interrupt their careers or spend time and money caring for their parents.

 “If we outsource care to machines, we are denying ourselves and others the opportunity for moral growth and commitment. In addition, this may alter our concept of what is considered a morally worthwhile goal.”

The idea itself is not new. For example, in the 1960s, the French philosopher Jacques Ellul pondered the effects of a technological mindset which, when focused only on material gain, begins to dictate what kinds of things we consider worth pursuing. Technology is not morally neutral – it can have an impact on what we consider important.  

For a 2010s example, consider the sex robot. Instead of criticising the idea that, in order to be lovable, a person must fit certain standards of beauty and sex appeal, we try to offer technological solutions to people who fall short of the conventional ideal.

 “Instead of sex, which is interaction between two conscious people, complete with commitments and responsibilities, this would essentially be masturbation with a sex robot,” states Visala.

What is the most valuable form of intelligence?

The way technology changes human interaction is a favoured topic of fiction. It is also a central theme in the film Blade Runner 2049, the 2017 sequel to the 1982 classic.

 “It’s interesting that the most human characters in the film are robots, while the humans are emotionless sociopaths. This is probably because humans have grown accustomed to treating human-like beings like trash. They have since begun to treat each other like trash as well,” Visala opines.

In such visions, the problem is not that machines become too much like humans, but that humans become like machines – creatures who do not care about one another and cannot treat each other with humanity.

What if we could develop a socially and morally intelligent AI? Would that even be possible, and what does Visala think the role of such a robot in a human community would be?

 “First of all, we must consider what we think of as the ideal kind of human intelligence. Is it logical problem-solving, physical performance or dealing with emotions?”

The last of these is the most challenging. From the perspective of interaction, humans are moral creatures who live within a culture and who have language. People react to invisible normative expectations and rules.

 “I would be very impressed if a robot could tell that I was lying and would convince me that lying is a bad idea because it is wrong and has consequences,” says Visala.  

Do unto techno sapiens as you would unto homo sapiens

According to Visala, the study of technology has traditionally paid little attention to the ethical considerations surrounding AI. But they are highly relevant.

 “When weapons of mass destruction were being developed, there was considerable political and ethical debate about how we live and how we see ourselves. We should be doing the same now, before artificial intelligence is developed, without rules or agreements, to the point that it can be used for anything.”

Visala believes that we should treat robots, techno sapiens, in a humane manner if they are functionally similar enough to homo sapiens. Consciousness and free will are often considered requirements for treatment that respects the ambitions of others.

The lack of consensus on the concept of humanity, even in the case of homo sapiens, is a major challenge, however.

 “There is no academic consensus on what it means for humans to have free will and responsibility or awareness. Neither do we know how moral issues should be primarily approached: from the perspective of rules or consequences,” Visala explains.

The bar should not be set too high when drawing the line between human and machine.

 “If there is a creature who seems to be behaving sensibly and is capable of morally independent decisions and self-direction, we should treat it as a human just to be sure.”

Our current artificial intelligences and robots are far from a human level of capacity and responsibility, particularly in terms of morality and self-direction. They cannot serve as members of the human moral community. In principle, however, artificial intelligence could develop significantly in these areas as well.

 “We humans have an unfortunate history in that, if something is unfamiliar, we deem it subhuman. This is how we have treated the developmentally challenged, for example,” Visala points out.
