In one of the previous posts on this blog, Tal Morse characterized digital immortality as a phenomenon positioned between ‘technological advancement’ and ‘social hesitance’, emphasizing that the development of immortalization strategies offered by the industry does not fully align with societal needs. This tension has been evident since the early days of the digital afterlife industry.
Rapid advancements in generative AI, particularly in natural language processing (NLP), have made digital immortality services even more accessible and ‘democratized’, only strengthening that tension. Unlike in the past, when creating a simulation of a deceased loved one required specialist skills and a significant budget, such services are now within reach of virtually any user.
In light of this acceleration, it is crucial to go a step further and consider how this ‘social hesitance’ can be translated into concrete design and policy solutions. This will help establish protections for all parties involved in digital immortality services, parties that may (and quite often do) have different needs, views, and expectations regarding their own or others’ digital immortalization.
Digital afterlife industry: a high-risk AI area
Drawing inspiration from existing ethical frameworks for the digital afterlife industry, in our forthcoming paper we argue for acknowledging the industry as a high-risk area of AI application. It is important to clarify that labeling the digital afterlife industry as high-risk does not entail an outright ban on these systems. Instead, it calls for stringent monitoring of their deployment and the establishment of specific safety standards for system providers to prevent or mitigate potential negative social consequences. To illustrate these consequences, in our paper we develop three hypothetical yet plausible scenarios for AI-enabled simulation of the deceased, taking into account the viewpoints of both the data donor and the data recipient. We use these terms to refer, respectively, to those whose data is used to construct posthumous avatars and those who are intended to engage with the final service of digital immortalization.
In one of the scenarios, we draw attention to a particularly vulnerable group of recipients of digital immortality systems: children. While no company currently targets children directly, it is feasible that services designed to support young people coping with the loss of a parent, for example, may appear in the future. Given the lack of research on the influence of such systems on children's psychology and well-being, we argue that special measures should be taken to protect them in these particularly difficult circumstances.
The other two scenarios center on issues of dignity and consent related to the use of the deceased's data to construct posthumous avatars.
Guidelines for responsible development
Alongside mapping the potential negative repercussions of the unrestricted deployment of AI-enabled simulations of the deceased, in our paper we present a list of recommendations for the responsible development of the digital afterlife industry. These include, among other things, ensuring meaningful transparency in digital immortality services, upholding the principle of mutual consent from both data donors and recipients for participation in digital immortality projects, and restricting access to re-creation services to adult users.
New technologies invariably bring new risks alongside innovative solutions. As Mark Coeckelbergh aptly stated in his book Human Being @ Risk: Enhancement, Technology, and the Evaluation of Vulnerability Transformations: "One hundred years ago, there was no risk of nuclear disaster; fifty years ago, there was no risk of computer viruses, let alone cybercrime; and twenty years ago, you could not be killed by a drone." We can extend this list by noting that, just a decade ago, digital immortality was a topic of little concern. Today, however, the situation has changed, and the pressing question is: how can we effectively mitigate the risks posed by the coming wave of simulations of the deceased enabled by generative AI systems?
Further reading:
Tomasz Hollanek and Katarzyna Nowaczyk-Basińska. 2024. Griefbots, Deadbots, Postmortem Avatars: On Responsible Applications of Generative AI in the Digital Afterlife Industry. Philosophy & Technology 37, 63.
Nora Freya Lindemann. 2022. The Ethics of ‘Deathbots’. Science and Engineering Ethics 28, 60: 1–16.
Carl Öhman and Luciano Floridi. 2018. An Ethical Framework for the Digital Afterlife Industry. Nature Human Behaviour 2: 318–320.