An older Korean man named Mr. Lee, dressed in a blazer and slacks, clutches the arms of his chair and leans toward his wife. “Sweetheart, it’s me,” he says. “It’s been a long time.”
“I never expected this would happen to me,” she replies through tears. “I’m so happy right now.”
Mr. Lee is dead. His widow is speaking to an A.I.-powered likeness of him projected onto a wall.
“Please, never forget that I’m always with you,” the projection says. “Stay healthy until we meet again.”
This conversation was filmed as part of a promotional campaign for Re;memory, an artificial intelligence tool created by the Korean start-up DeepBrain AI, which offers professional-grade studio and green-screen recording (as well as relatively inexpensive ways of self-recording) to create lifelike representations of the dead.
It’s part of a growing market of A.I. products that promise users an experience that closely approximates the impossible: communicating and even “reuniting” with the deceased. Some of the representations — like those offered by HereAfter AI and StoryFile, which also frames its services as being of historical value — can be programmed with a person’s memories and voice to produce realistic holograms or chatbots with which family members or others can converse.
The desire to bridge life and death is innately human. For millenniums, religion and mysticism have offered pathways for this — blurring the lines of logic in favor of the belief in eternal life.
But technology has its own, relatively recent, history of attempting to link the living and the dead.
A little over a century ago, Thomas Edison announced that he had been trying to invent an “apparatus” that would permit “personalities which have left this earth to communicate with us.” Known for his contributions to the telegraph, the incandescent lightbulb and the motion picture, Edison told The American Magazine that this device would function not by any “occult” or “weird means” but instead by “scientific methods.”
As science and technology have evolved, so too have the ways in which they attempt to transcend death. The 19th and early 20th centuries saw the rise of Spiritualism and pseudoscientific attempts at communing with the dead, from séances and ghost sightings to Edison’s theoretical “spirit phone.” With the invention of today’s A.I. avatars, we’re entering a new age of techno-spiritualism.
Machines already mediate much of our lives and dictate many of our decisions. Algorithms serve us news and music. Targeted ads predict our desires. Sleep-tracking apps and smartwatches gamify our physical fitness. But until recently, grief and death remained among the few aspects of modern life not totally subsumed by the steady societal drumbeat of optimization, efficiency and productivity.
As the so-called death-tech industry takes off and A.I. becomes more ubiquitous, however, grief may not remain beyond the fray for long.
A.I. tools for psychological well-being are already relatively mainstream. They tend to come in the form of mental health chatbots or “companions,” like Replika, which some people use to create avatars on which they rely for emotional support. This latest wave of technology, however, has grief and loss specifically in its cross hairs.
Many of the companies producing A.I. avatars and chatbots have adopted the language of optimization, suggesting that their tools can help people “ease grief” or otherwise better process loss by providing a chance for postmortem conversations and closure. Such claims play into the faulty but mainstream notion that grief moves linearly or in discrete stages through which one can predictably and cleanly progress.
Prominently displayed on Re;memory’s website is a quote the company ascribes to Confucius: “If you do not grieve over a significant loss, what else could evoke your sorrow?” The implication seems to be that only by bringing back a dead loved one via its technology might one be able to properly grieve.
The potential risks of A.I. tools for grieving are significant, not least because the companies producing them are driven by profit — incentivized to exploit desires and delusions that may be unhealthy for their users. A recent study from the University of Cambridge, for instance, evaluated the ethics of “the digital afterlife industry” and posited that these businesses may soon realize there’s even more money to be made by requiring people to pay subscription fees or watch advertisements in order to continue interacting with their dead loved ones’ avatars, especially after hooking them on the ability to converse. They might also have the deadbot make sponsored suggestions — like ordering a dead loved one’s favorite food via a specific delivery service.
Another possible dystopian scenario the Cambridge researchers imagined is a company failing (or refusing) to deactivate its “deadbots,” which could lead to survivors receiving “unsolicited notifications, reminders and updates” and instilling the feeling that they’re “being stalked by the dead.”
This mixing of reality, fantasy and enterprise is a detriment to grieving.
If the Victorian séance provided the temporary illusion of otherworldly communion, today’s A.I.-driven afterlife offers something even more insidious: an ongoing, interactive discussion with the dead that prevents or delays a genuine reckoning with loss.
In certain contexts, chatbots and avatars could be useful tools for processing a death — particularly if they’re treated as spaces of reflection, like diaries. But in our efficiency-obsessed culture that encourages us to skip over the unpleasant, painful and messy aspects of life just because we think we can, healthy use of these tools is only possible if accompanied by a firm understanding that the bots or holograms are fundamentally not real. The uncanny verisimilitude of many of these avatars complicates that and makes it more likely that their ultimate result will not be helping people process grief, but rather allowing them a means of avoiding it.
The more we use these tools for avoidance, the greater their potential for harm — disconnecting us from our own pain and from the communal mourning toward which our society should be striving. If we ever come to see the use of these tools as a necessary part of grieving, we are, to put it simply, hosed.
How popular these A.I. tools for grieving will become isn’t immediately clear, but given the sheer number of businesses competing to create and market them — led largely by industry in the United States, China and South Korea — it’s fair to assume they will become a significant part of our shared future.
What would it mean instead to stop and embrace the most sharply dispiriting feelings that surround loss? What would it mean to consider that, while efficiency and optimization may be useful in the marketplace, they have no place in matters of the heart?
As we enter a new era of techno-spiritualism, the question will not be when optimization culture will come for grief, but rather how we will choose to grapple with it when it inevitably does.
From the spirit phone to the deadbot, there are and always will be attempts to technologically connect with the deceased. Most worrisome is that the A.I. possibilities we have today represent only the tip of a massive iceberg. The near future will provide ever more realistic and seductive ways of ignoring or wholly creating our own realities — and isolating us ever further in our grief.
As individuals, we may not be able to control technology’s progression. What we can control is how we face that which is unpleasant and painful, embracing those feelings, even and especially at their hardest.
Cody Delistraty is the author of “The Grief Cure: Looking for the End of Loss.” He is currently working on a book about a group of Spiritualists.