Digital immortality: balancing innovation with responsibility

In one of the previous posts on this blog, Tal Morse characterized digital immortality as a phenomenon positioned between ‘technological advancement’ and ‘social hesitance’, emphasizing that the immortalization strategies offered by the industry do not fully align with societal needs. This tension has been evident since the early days of the digital afterlife industry. At the same time, over the years we have also developed adaptive mechanisms that shape how we perceive digital immortality.

Rapid advancements in generative AI, particularly in natural language processing (NLP), have made digital immortality services even more accessible and ‘democratized’, further strengthening that tension. In the past, creating simulations of deceased loved ones required specialist skills and a significant budget, as portrayed in popular media stories featuring individuals like James Vlahos, Bina Rothblatt, or Roman Mazurenko. Today, nearly anyone with Internet access and some basic know-how can ‘revive’ a deceased loved one, as evidenced by numerous cases in China and the United States.

In light of this acceleration, it is crucial to go a step further and consider how this ‘social hesitance’ can be translated into concrete design and policy solutions. Doing so will help establish protections for all parties involved in digital immortality services, parties that may (and quite often do) have different needs, views, and expectations regarding their own or others’ digital immortalization.

Digital afterlife industry: a high-risk AI area

Drawing inspiration from existing ethical frameworks (Öhman & Floridi, Lindemann, Harbinja), here at the Leverhulme Centre for the Future of Intelligence, University of Cambridge, Dr. Tomasz Hollanek and I have begun to explore potential protective mechanisms that could be implemented at the EU level to establish necessary safety standards for the digital afterlife industry. The currently discussed EU AI Act, the first regulatory framework for AI in the EU, categorizes AI systems by the level of risk they may pose in predefined application areas. This classification ranges from ‘unacceptable risk’, covering systems with evidently harmful implications, such as cognitive or behavioral manipulation, to ‘minimal risk’, which includes solutions like email spam filters with relatively limited negative social effects.

In our forthcoming paper, we argue for acknowledging the digital afterlife industry as a high-risk area of AI application. It is important to clarify that labeling the digital afterlife industry as high-risk does not entail an outright ban on these systems. Instead, it calls for stringent monitoring of their deployment and the establishment of specific safety standards for system providers to prevent or mitigate potential negative social consequences. To illustrate these consequences, in our paper we develop three hypothetical yet plausible scenarios for AI-enabled simulation of the deceased, taking into account the viewpoints of both the data donor and the data recipient. We use these terms to refer, respectively, to those whose data is used to construct postmortem avatars and those who are intended to engage with the final digital immortalization service.

In one of the scenarios, we draw attention to a particularly vulnerable group of recipients of digital immortality systems: children. While none of the existing companies directly target children, it is feasible that services designed to help the young cope with the loss of a parent, for example, may appear in the future. Given the lack of research on the influence of such systems on children’s psychology and well-being, we argue that special measures should be taken to protect them in these particularly difficult circumstances.

The other two scenarios center on issues of dignity and consent related to the use of digital remains, encompassing the interests of both data donors and data recipients. Within the current regulatory framework, it is technically possible to create a postmortem avatar of a person without their explicit consent, a situation that is ethically problematic in itself. When we add the layer of financial motives behind recreation services, postmortem avatars can become a new platform for advertising products, potentially encroaching upon the dignity of data donors. Conversely, a data donor can create a postmortem avatar without the explicit consent of the technology’s recipients, which can also be problematic. For example, a father could virtually recreate himself, leaving the simulation as a ‘farewell gift’ for his adult children, who may not be prepared to process their grief in this manner. We argue, therefore, that the rights of both stakeholders – data donors and recipients – should be equally safeguarded, and that protocols should be in place to address their different potential needs.

Guidelines for responsible development

Alongside mapping the potential negative repercussions of the unrestricted deployment of AI-enabled simulations of the deceased, our paper presents a list of recommendations for the responsible development of the digital afterlife industry. These include, among others, ensuring meaningful transparency in digital immortality services, upholding the principle of mutual consent from both data donors and recipients for participation in digital immortality projects, and restricting access to recreation services to adult users.

New technologies invariably bring new risks alongside innovative solutions. As Mark Coeckelbergh aptly stated in his book Human Being @ Risk: Enhancement, Technology, and the Evaluation of Vulnerability Transformations: "One hundred years ago, there was no risk of nuclear disaster; fifty years ago, there was no risk of computer viruses, let alone cybercrime; and twenty years ago, you could not be killed by a drone." We can extend this list: just a decade ago, digital immortality was a topic of little concern. Today, the situation has changed, and the pressing question is: how can we effectively mitigate the risks posed by the potential wave of simulations of the deceased enabled by generative AI systems?


Further reading:

Tomasz Hollanek, Katarzyna Nowaczyk-Basińska. 2024. Griefbots, Deadbots, Postmortem Avatars: On Responsible Applications of Generative AI in the Digital Afterlife Industry. Philosophy & Technology 37, 63. https://doi.org/10.1007/s13347-024-00744-w

Nora Freya Lindemann. 2022. The Ethics of ‘Deathbots.’ Science and Engineering Ethics 28, 60: 1–16. https://doi.org/10.1007/s11948-022-00417-x

Carl Öhman, Luciano Floridi. 2018. An Ethical Framework for the Digital Afterlife Industry. Nature Human Behaviour 2: 318–320. https://doi.org/10.1038/s41562-018-0335-2