AI can make the dead talk – why this doesn’t comfort us

For as long as humans have buried their dead, they’ve dreamed of keeping them close. The ancient Fayum portraits – those stunningly lifelike images wrapped in Egyptian mummies – captured faces meant to remain present even after life had left the body.

Effigies across cultures served the same purpose: to make the absent present, to keep the dead around in some form.

But these attempts shared a fundamental limitation. They were vivid, yet they could not respond. The dead remained dead.

Across time, another idea emerged: the active dead. Ghosts who slipped back into the world to settle unfinished business, like spirits bound to old houses. Whenever they did speak, however, they needed a human medium – a living body to lend them voice and presence.

Media evolved to amplify this ancient longing to summon what is absent. Photography, film, audio recordings, holograms. Each technique added new layers of detail and new modes of calling the past into the present.

Now, generative AI promises something unprecedented: interactive resurrection.

It offers an entity that converses, answers and adapts. A dead celebrity digitally forced to perform songs that never belonged to them. A woman murdered in a domestic-violence case reanimated to “speak” about her own death. Online profiles resurrecting victims of tragedy, “reliving” their trauma through narration framed as warning or education.




Read more: Should AI be allowed to resurrect the dead?


We are researchers who have spent many years studying the intersection of memory, nostalgia and technology. We particularly focus on how people make meaning and remember, and how accessible technologies shape these processes.

In a recent paper, we examined how generative AI is used to reanimate the dead across everyday contexts. The easy circulation of these digital ghosts raises urgent questions: who authorises these afterlives, who speaks through them, and who decides how the dead are put to work?

What gives these audiovisual ghosts their force is not only technological spectacle, but the sadness they reveal. The dead are turned into performers for purposes they never consented to, whether entertainment, consolation or political messaging.

This display of AI’s power also exposes how easily loss, memory and absence can be adapted to achieve various goals.

And this is where a quieter emotion enters: melancholy. By this we mean the unease that arises when something appears alive and responsive, yet lacks agency of its own.

These AI figures move and speak, but they remain puppets, animated at the direction of someone else’s will. They remind us that what looks like presence is ultimately a carefully staged performance.

They are brought back to life to serve, not to live. These resurrected figures do not comfort. They trouble us into awareness, inviting a deeper contemplation of what it means to live under the shadow of mortality.

What ‘resurrection’ looks like

In our study, we collected more than 70 cases of AI-powered resurrections. They are especially common on video-heavy platforms like TikTok, YouTube and Instagram.

Given how widely these cases now circulate, we first compared them all, looking for similarities in their purposes and applications. We also noted the data and AI tools used, as well as the people or institutions employing them.

A prominent use of generative AI involves the digital resurrection of iconic figures whose commercial, cultural and symbolic value often intensifies after death. These include:

  • Whitney Houston – resurrected to perform both her own songs and those of others, circulating online as a malleable relic of the past.

  • Queen Elizabeth II – brought back as a “rap sis from the hood”, performing with a swagger drawn from Black urban culture. This transformation illustrates how nationally significant figures, once held at an ivory-tower distance, become a form of public property after death.

These algorithmic afterlives reduce the dead to entertainment assets, summoned on command, stripped of context, and remade according to contemporary whims. But AI resurrection also moves along a darker register.

  • A woman who was raped and murdered in Tanzania has reappeared in AI-generated videos, where she is made to warn others not to travel alone, transforming her death into a cautionary message.

  • A woman is summoned through AI to relive the most tragic day of her life, digitally reanimated to tell the story of how her husband killed her, embedding a warning about domestic violence.

Here, AI ghosts function as admonitions – reminders of injustice, war and unresolved collective wounds. In this process, grief becomes content and trauma a teaching device. AI does not merely revive the deceased. It rewrites and redistributes them according to the needs of the living.

While such interventions may initially astonish, their ethical weight lies in the asymmetry they expose: those unable to refuse are summoned to serve purposes to which they never consented. And each case is marked by a triangle of sadness: the tragedy itself, its resurrection and the forced reliving of it.

The melancholy

We suggest thinking in terms of two distinct registers of melancholy to locate where our unease resides and to show how readily that feeling can disarm us.

The first register concerns the melancholy attached to the dead. In this mode, resurrected celebrities or victims are summoned back to entertain, instruct or re-enact the very traumas that marked their deaths. The fascination of seeing them perform on demand dulls our capacity to register the exploitation involved, and the unease, cringe and sadness embedded in these performances.

The second register is the melancholy attached to us, the living revivalists. Here, the unease emerges not from exploitation but from confrontation. In gazing at these digital spectres, we are reminded of the inevitability of death, even as life appears extended on our screens. However sophisticated these systems may be, they cannot re-present the fullness of a person. Instead, they quietly re-inscribe the gap between the living and the deceased.




Read more: Can you really talk to the dead using AI? We tried out ‘deathbots’ so you don’t have to


Death is inevitable. AI resurrections will not spare us from mourning; instead, they deepen our encounter with the inescapable reality of a world shaped by those who are no longer here.

Even more troubling is the spectacular power of technology itself. As with every new medium, the enchantment of technological “performance” captivates us, diverting attention from harder structural questions about data, labour, ownership and profit, and about who is brought back, how and for whose benefit.

Unease, not empathy

The closer a resurrection gets to looking and sounding human, the more clearly we notice what is missing. This effect is captured by the concept known as the uncanny valley, first introduced by Japanese roboticist Masahiro Mori in 1970. It describes how nearly-but-not-quite-human figures tend to evoke unease rather than empathy in viewers.

This is not solely a matter of technical defects in resurrections; imperfections may be reduced with better models and higher-resolution data. What remains is a deeper threshold, an anthropological constant that separates the living from the dead. It is the same boundary that cultures and spiritual traditions have grappled with for millennia. Technology, in its boldness, tries again. And like its predecessors, it fails.

The melancholy of AI lies precisely here: in its ambition to collapse the distance between presence and absence, and in its inability to do so.

The dead don’t return. They only shimmer through our machines, appearing briefly as flickers that register our longing and, just as clearly, the limits of what technology can repair.

By Tom Divon, Researcher, Hebrew University of Jerusalem
