We are surrounded by artificial intelligence (AI) technologies. Robots hoover our homes and deliver food, sat nav systems patiently redirect us to new routes, and chatbots never tire of answering our questions. Older adults can talk to a therapeutic robot seal, language models generate the desired content in an instant, and we can date our picture-perfect AI partners online.
Is our deepest desire to obtain what we yearn for and what pleases us, or are we willing to face the new and unexpected? Two ethics researchers – Professor of Social Ethics Jaana Hallamaa and Postdoctoral Researcher in Social Ethics Taina Kalliokoski – agree that AI may affect us and change various aspects of our lives, but it cannot replace the core of humanity.
Fragility is part of humanity
Could it be easier to expose your imperfections and vulnerabilities to AI rather than another human being? Will the day come when we open up to an AI therapist, and a robot provides intimate care?
Kalliokoski came across this question while working on the podcast 10 pulmaa tekoälystä (‘10 quandaries of AI’).
“If a person’s trust in other people has been broken, it may be easier to trust a machine. AI applications may be suited to fulfilling some needs.”
Hallamaa describes how a car’s sat nav system can be more reliable or at least more patient than a human map reader. Its unflappability can improve the driver’s mood.
“And as opposed to family and friends, a chatbot in a care home never tires of answering the questions of a person with dementia. However, the chatbot cannot return to the past and share the memories of its interlocutor.”
Taina Kalliokoski says it is good to nurture tolerance of fragility and vulnerability, as well as of the uncertainty involved in all human relationships, even if doing so feels uncomfortable.
“Human relationships are not fixed and closed systems, but always entail a degree of unpredictability.”
AI affects our notions of ourselves and each other
Information systems help us overcome earlier limitations of time and place. We are constantly informed of the latest developments, in the here and now, with the internet always at our fingertips. Hallamaa believes this can make us more impatient, as human relationships do not function at a similar pace. In them, time and place are structured in different ways.
Kalliokoski shares the same view.
“An algorithmic reality may narrow our thinking of what the world is like. Communities may increasingly form distinct information bubbles. I’m also worried about the way AI software intensifies the pressure and aspiration for efficiency.”
Both Kalliokoski and Hallamaa draw attention to the inequalities growing in our increasingly AI-driven society and say these have been inadequately discussed despite their direct link to social stability.
“The more people submit data and accept cookies, the more efficient the services they receive,” notes Kalliokoski.
“Permitting access to health-data management systems and databases can provide synergy benefits. But what about the people who don’t want to disclose such information? Can they refuse to do so? Will their value as a person or the services offered to them change if they don’t produce as much data? This should be studied more to allow us to anticipate such outcomes.”
On the other hand, Kalliokoski says that people are flexible and capable of opposition.
“The idea of a digital detox is already being discussed. It shares features with religious asceticism, in which the sense of meaning arises from abstinence and refusal. The banning of mobile phones from schools is now a hot topic. A wise society knows when to take a step back.”
Does AI undermine support for public-sector services?
AI technologies reveal how the roles of government and citizen are perceived. Kalliokoski says that public services are being developed in a direction where everyone is expected to look after their own wellbeing and lives.
“Ultimately, it’s about whether people are seen as a costly or a profitable resource.”
If people cannot access public-sector care services and can only reach a machine or a call-back service, they will seek help elsewhere, if possible. Hallamaa sees a danger here.
“It may reduce willingness to support the public sector by paying taxes. At the same time, scandals in care homes and, for example, the results of school privatisation in Sweden show that private-sector services don’t always work well either.”
Kalliokoski points out that one downside of automation is that individual circumstances are often more complex than those anticipated and encoded in AI systems.
“In comparison, a sat nav system may seem great: it always seeks a new solution to each situation. Maybe similar learning systems could help the ones used in the public sector operate more effectively,” Hallamaa suggests.
What is a good human life?
Both Hallamaa and Kalliokoski view AI-assisted partners with suspicion: people will always yearn for human contact.
Kalliokoski notes that AI has no authentic experiences to share, which is why Hallamaa believes a perfect AI partner may become irritating over time.
“Sure, you can chat with language models too, but they’re always living in the moment. Would a language model be able to provide interesting conversation openers or compelling and even unsettling viewpoints?”
We return to optimisation:
“What would be a good human life? What is our shared vision of a good life?” asks Kalliokoski.
“An AI-assisted fantasy partner, ready to meet all our needs, prioritises efficient optimisation. But I think we humans are bound to each other by our imperfections and the friction between us. All these create a deeper sense of meaning and help us grow as human beings. AI can’t take away or replace that.”