How do you punish a robot?

Who’s responsible if a care robot or another intelligent machine makes a fatal mistake? Cognitive science is taking a new look at robotics from the perspective of moral psychology.

Nearly all of us carry devices in our pockets that make decisions for us. Yet analysis of the moral issues these machines raise is only just beginning.

Researcher Michael Laakasuo from the Faculty of Behavioural Sciences leads the Moralities of Intelligent Machines project, which is taking part in the Helsinki Challenge competition held to celebrate the University of Helsinki's 375th anniversary.

Laakasuo’s project focuses on the emotions evoked by robots’ actions. For example, study participants are shown videos in which a robot does something wrong or makes a fatal mistake. A battery of moral-psychological questions posed to the participants afterwards seeks to determine the degree to which viewers blame the robot for the wrongdoing. What kinds of moral decisions do we expect of machines?

“After witnessing a mistake, we may want to punish the robot. But how do you punish a machine? By kicking it to pieces?” asks Laakasuo.

Rise of robotic power

It’s easy to anthropomorphise robots, as in the film I, Robot, in which a robot must choose whether to rescue an adult or a child.

It seems that the more human a machine appears, the more accountability we ascribe to it. And when the machine is seen as human, it, rather than its programmer, may be held guilty of malpractice, for example.

However, a robot is not a moral subject, as it has no self-awareness or emotions.

In our digitalised world, robots hold an increasing amount of power. They make many of our decisions.

“Who looks for a street on a map these days when your GPS can pinpoint your location? In the Internet of Things, machines can plan your weekly menu, give you fashion tips, and so on.”

People become complacent

“What happens to chance in a world where everything is controlled?” Laakasuo asks.

“How self-indulgent and complacent will we be once we no longer have to make our own decisions?” Laakasuo fears for creativity.

The Moralities of Intelligent Machines team, consisting of experts in the cognitive sciences, intends to determine what kinds of moral expectations people have of robots. The international group led by Laakasuo features doctoral student Marianna Drosinou, assistant professor Markus Jokela, doctoral student Nils Köbis, post-doctoral researcher Jussi Palomäki and researcher Mikko Salmela.

Watch the project presentation on YouTube
