In recent years, educators have started to notice that many of their students don't understand files and folders. Raised on search bars and apps, these students simply ask for the document they want rather than thinking about where it is stored; the directory tree just isn't part of their mental furniture.
Haunted By Bots
I’ve been thinking about this a lot lately, partly because it makes me feel old, but also because it relates to work my colleague Adam Buben and I have been doing on interactive personality constructs of the dead (IPCDs): chatbots built from the digital traces of dead people so that the living can go on talking to them.
Much has been written about the risk these technologies pose to the grieving processes of the living (e.g. Lindemann 2022). In our earlier work (e.g. Buben 2015; Stokes 2021), Adam and I have argued that IPCDs also pose at least two distinct risks to the dead. Firstly, there’s a risk that we reduce the dead to mere resources – what Heidegger called ‘standing reserve’ – for the living. Secondly, there’s a risk that we’ll use those resources to replace, rather than commemorate, the dead. Replacement may make the bereaved feel better, but it comes at a steep moral cost: treating the dead, and indeed the living as well, as replaceable.
Sceptical Responses
However, these worries are premised on the idea that IPCDs will be able to replace the dead on an experiential level – that is, that we can experience sufficiently well-formed chatbots as being the dead. When I describe this scenario to people, many are deeply sceptical. Perhaps, they say, we can indeed develop bots that would be convincing in all the relevant ways. Yet we wouldn’t forget that the dead are dead – indeed, we cannot forget that the dead are dead – no matter how lifelike the bots we make of them are. We are not going to be like those (apocryphal) 1890s audiences seeing cinema for the first time and fleeing in terror from footage of an approaching train.
There is certainly empirical evidence for this objection. Tal Morse (2024), for instance, has studied users’ perceptions of digital afterlife and posthumous communication technologies, and his findings suggest that even people drawn to such ‘digital necromancy’ do not lose sight of the fact that the dead are dead.
Adapting to New Technologies
Yet it’s also the case that humans adapt to new technologies, and re-embody ourselves through them, in ways that do not leave our concepts unchanged. A generation that grew up with telephones came to hear the voice of the other person through the earpiece and not simply a mechanical reproduction thereof, and the first generation to grow up using Zoom regularly can be expected to have fewer qualms about telepresence than their forebears.
Elsewhere (Stokes 2021), I’ve used David Oderberg’s notion of ‘telic possibility’ (Oderberg 2012) to describe the way in which we experience digital assistants like Siri and Alexa as if they might as well be real people. It’s not that we think Siri is a real person; it’s that, for the purposes for which we talk to Siri, it simply doesn’t matter. The question is whether all our interactions with bots, including ghostbots, could become like that. Like those Gen Z students who no longer think in terms of file locations, generations that have grown up around synthetic agents may simply stop drawing the conceptual distinction between the dead person themselves and their digital simulacrum.
The concern here, then, is not so much that we might come to treat deathbots as if they really were the person who has died rather than a replica. The real danger is that we will stop caring about the difference at all.
References
Buben, Adam (2015), 'Technology of the Dead: Objects of Loving Remembrance or Replaceable Resources?', Philosophical Papers, 44 (1), 15-37.
Lindemann, Nora Freya (2022), 'The Ethics of "Deathbots"', Science and Engineering Ethics, 28 (6), 60.
Morse, Tal (2024), 'Digital Necromancy: Users' Perceptions of Digital Afterlife and Posthumous Communication Technologies', Information, Communication & Society, 27 (2), 240-56.
Oderberg, David S. (2012), 'Disembodied Communication and Religious Experience: The Online Model', Philosophy & Technology, 25 (3), 381-97.
Stokes, Patrick (2021), Digital Souls: A Philosophy of Online Death (London: Bloomsbury).