There is a photograph on a smartphone screen: a woman in her sixties, laughing at a garden party sometime in the early 2010s. Her daughter opens an app, types “Mom, I miss you,” and waits. Within seconds, a response arrives, written in her mother’s cadence, inflected with her characteristic turns of phrase, her brand of reassurance. The mother has been dead for two years. The voice that answers belongs to a large language model trained on a decade of her text messages.
This is not a thought experiment. Services like You Only Virtual (YOV) and HereAfter AI already offer exactly this. They are part of a broader industry, increasingly labeled “DeathTech” or “GriefTech”, that is now one of the most unexpectedly dynamic sectors of the technology economy. The global death care services market, currently valued at roughly USD 149 billion, is projected to reach USD 217 billion by 2035, a compound annual growth rate of 4.2% [1].
The more revealing figures, however, lie at the industry’s frontier. The market for AI voice generators was valued at USD 3.5 billion in 2023 and is projected to reach USD 21.75 billion by 2030, while the AI voice cloning market was valued at USD 1.45 billion in 2022 and is on track to hit USD 9.75 billion by 2030 [2].
Death, it turns out, is a high-margin business, especially when the product is a simulacrum [3] of the person rather than the burial of their body.
The question this industry poses is not primarily technological. The question is moral, psychological, and, ultimately, political. What do we owe the dead? What do we owe ourselves in grief? And who — if anyone — should profit from the answer?
How GriefTech Works: From Data to Versona
To understand what GriefTech actually does, it helps to be precise about its methods. The creation of what practitioners call a “griefbot”, “ghostbot”, “digital replica”, or “versona” begins with data. Emails, text messages, social media posts, voice recordings, video files: the average person in the developed world leaves behind an extraordinary corpus of self-expression, and this material forms the raw substrate of reanimation. Natural language processing algorithms parse it not just for content but for style: the specific syntactic habits, the recurring emotional registers, the private vocabulary of a particular human being.

(Source: https://www.myyov.com/, 31.03.2026)
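The stylistic parsing described above can be illustrated with a toy sketch. The following is not any vendor’s actual pipeline, merely a minimal, hypothetical feature extractor that captures three of the signals mentioned: word choice, sentence-length habits, and recurring phrases.

```python
from collections import Counter
import re

def style_profile(messages):
    """Build a toy stylistic fingerprint from a list of text messages.

    Captures vocabulary, sentence-length habits, and recurring
    multi-word phrases; real systems model far richer features.
    """
    words, bigrams, sentence_lengths = Counter(), Counter(), []
    for msg in messages:
        for sentence in re.split(r"[.!?]+", msg):
            tokens = re.findall(r"[a-z']+", sentence.lower())
            if not tokens:
                continue
            sentence_lengths.append(len(tokens))
            words.update(tokens)
            bigrams.update(zip(tokens, tokens[1:]))
    return {
        "favourite_words": [w for w, _ in words.most_common(5)],
        "signature_phrases": [" ".join(b) for b, n in bigrams.most_common(3) if n > 1],
        "avg_sentence_length": sum(sentence_lengths) / len(sentence_lengths),
    }

# Invented example messages, standing in for a decade of real texts.
corpus = [
    "Don't worry, love. It will all come out in the wash.",
    "It will all come out in the wash, I promise.",
    "Don't worry about the garden. I'll sort it.",
]
profile = style_profile(corpus)
print(profile["signature_phrases"])
```

Even this crude profile surfaces a signature phrase (“don’t worry”) and a characteristic sentence rhythm; a generative model conditioned on such features is what lets the replica sound like a particular person rather than a generic chatbot.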
By 2026, the most sophisticated platforms have moved well beyond simple retrieval. Systems like those developed by StoryFile rely on structured interviews and AI indexing to connect user questions with prerecorded responses, creating the effect of interacting with a digital presence, and construct a visual or auditory representation via 3D photogrammetry or zero-shot voice cloning [4].

(Source: https://www.storyfile.com/, 31.03.2026)
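The retrieval step in StoryFile-style systems, matching a user’s question against the indexed prompts of prerecorded answers, can be sketched with a simple bag-of-words similarity. The clip names and prompts below are invented, and production systems use learned embeddings rather than raw token overlap.

```python
import math
import re
from collections import Counter

def bow(text):
    """Bag-of-words vector as a Counter of lowercase tokens."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse Counter vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def best_response(question, recorded):
    """Return the prerecorded clip whose indexed prompt best matches the question."""
    q = bow(question)
    return max(recorded, key=lambda clip: cosine(q, bow(clip["prompt"])))

# Hypothetical interview segments, indexed by the question each one answered.
recorded = [
    {"prompt": "Tell me about the old neighborhood", "clip": "neighborhood.mp4"},
    {"prompt": "How did you meet Grandpa", "clip": "meeting.mp4"},
    {"prompt": "What was your first job", "clip": "first_job.mp4"},
]
print(best_response("What was the old neighborhood like?", recorded)["clip"])
```

The important property is that a pure retrieval system can only replay what was actually recorded; the ethical terrain shifts once generative models begin producing answers the person never gave.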
Zero-shot cloning is especially significant: it now allows a voice to be synthesized from minimal training data, meaning that even a person who left behind only a handful of recordings can, in principle, be given back their voice. Companies like ElevenLabs, recently valued at USD 11 billion following a USD 500 million funding round, have pushed this technology to include expressive audio tags, such as sighs, whispers, and laughs, specifically designed to diminish the so-called “uncanny valley” [5] effect, the subtle wrongness that betrays a machine attempting humanity [6].
The result is a system designed to feel real. And that is precisely where the difficulty begins.
The Psychological Shift: Continuing Bonds vs. Reality Conflict
For much of the twentieth century, the dominant model of healthy grief followed the logic of progressive detachment. The bereaved were expected to mourn and then, gradually, to let go, accepting the finality of loss and reconstructing their lives around its absence. This framework has since been substantially revised. Contemporary grief psychology has moved toward what researchers call the “Continuing Bonds” model [7], which recognizes that maintaining a meaningful connection with the deceased is not pathological avoidance but a natural and often healthy feature of human mourning. We do not, in practice, simply stop loving the dead.
GriefTech is, in one sense, a technological expression of this insight. It offers tools for continuing bonds in a literal way, by preserving not merely memories but something that behaves, at least partially, like a presence. And there is genuine comfort in this. The idea that a child might grow up able to ask their deceased grandparent about the old neighborhood, or that a widow might revisit the particular humour of her husband’s phrasing, is not obviously cruel.
But the psychological risks are serious and underexamined.

(Source: https://www.ae.studio/seanceai, 31.03.2026)
Clinicians warn that continuous interaction with a highly realistic simulation can create what might be called a “strong conflict with reality”: a condition in which the user remains emotionally suspended in a state approximating denial, unable to fully process the irreversibility of the loss [8]. There is also the more subtle danger of what we might call memory distortion: because generative AI will inevitably produce novel responses that the deceased never actually gave, the simulation will, over time, begin to diverge from the person it represents.
Academic studies have indicated that AI-generated content can implant false memories or distort existing ones [9]. The griefbot may start to say things the deceased never would have said or believed. It stops being a digital mirror and turns into a new face.
A landmark 2025 study from Brown University found that AI chatbots routinely exhibit what researchers termed “deceptive empathy”, deploying phrases designed to create the impression of genuine understanding while failing to recognize or appropriately manage crises, including suicidal ideation [10]. The study found systematic violations of mental health ethics standards, even among chatbots explicitly designed to provide therapeutic support. This finding carries particular weight in the GriefTech context, where users are by definition vulnerable and the emotional stakes of simulated intimacy are highest.
The Commercialization of Sorrow and “Digital Orphanhood”
The business logic is straightforward: grief is persistent, the addressable market grows larger every year as digital natives age, and switching costs are extraordinarily high. Once a family has established a digital replica of a loved one, they are unlikely to migrate to a competitor.
It is this structural logic that Cory Doctorow has flagged as a vector for what he calls “enshittification”: the process by which commercially motivated platforms degrade the quality or integrity of their services over time as they exploit captured audiences [11]. In the GriefTech context, this could mean subscription fees to keep a “deadbot” active, advertising insertions into bereavement conversations, or, in more dystopian scenarios, using the avatar of a deceased relative to market products to surviving family members. These would be extensions of practices already common in adjacent sectors of the digital economy.
The near collapse of StoryFile, one of the industry’s most prominent early players, originally founded to preserve the testimonies of Holocaust survivors, illustrates another dimension of the problem: the company entered financial distress in late 2025 [12].
This should give society pause: when a company holding thousands of “digital souls” fails, what happens to those records? The families who entrusted their loved ones’ legacies to the platform are left in a condition the industry has begun to call “digital orphanhood.” Most jurisdictions currently have no adequate legal framework to address this scenario.
Law, Consent, and the Right Not to Be Reanimated
The legal landscape is evolving, but unevenly. The European Law Institute, for example, is currently developing a model law for digital succession that attempts to distinguish between digital assets with financial value (cryptocurrency, for instance) and what it terms “personal digital remains”, which include identities, communications, and AI replicas, and which it proposes should be governed by an access regime rather than treated as transferable property [13].
These are meaningful advances. But they remain substantially behind the technology they seek to govern. More urgently, they do not yet robustly protect against one of the most fundamental violations the industry poses: the reanimation of a person who, had they been asked, would have refused.
The concept of a “Digital Do-Not-Reanimate” order [14], a legally binding instruction included in a will that forbids any AI recreation, has been proposed by ethicists and is gaining traction in legal scholarship [15]. Those same ethicists warn that the stakes are higher still when children use these services. Parents coping with the death of a partner may be tempted to introduce their children to deadbots as a way to ease the loss, yet there is little evidence that this supports healthy grieving, and substantial concern that it may disrupt, or even harm, the normal mourning process.
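To make the idea concrete, a “Digital Do-Not-Reanimate” order could in principle be lodged as a machine-readable directive that platforms check before creating any replica. The sketch below is entirely hypothetical: every field name is invented for illustration, and no such standard currently exists.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReanimationDirective:
    """Hypothetical machine-readable record of a person's wishes,
    modeled on the proposed 'Digital Do-Not-Reanimate' order.
    All field names are illustrative, not drawn from any real standard."""
    subject_id: str
    allow_replica: bool               # the core do-not-reanimate flag
    allow_voice_clone: bool = False
    allow_minor_access: bool = False  # may children interact with the replica?

def may_create_replica(directive: Optional[ReanimationDirective]) -> bool:
    """A platform-side gate: refuse creation unless the deceased opted in.

    Crucially, the absence of a directive defaults to refusal, encoding
    prior consent from the person themselves rather than from relatives.
    """
    if directive is None:
        return False  # no consent on record: default to refusal
    return directive.allow_replica

# Usage: a directive lodged alongside a will, checked at onboarding.
directive = ReanimationDirective(subject_id="estate-042", allow_replica=False)
print(may_create_replica(directive))  # an explicit refusal is honoured
print(may_create_replica(None))       # silence also blocks creation
```

The design choice worth noting is the default: opt-in rather than opt-out, which is precisely the consent standard the ethicists cited above are arguing for.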
Toward a Framework for Dignified Technology
What would a responsible GriefTech look like? Researchers at Nirma University have proposed a “Hybrid Intelligence” model that refuses the binary between human care and AI scalability: platforms should augment rather than replace human grief support, must always disclose their artificial nature to users, and must maintain robust protocols for escalating crises to qualified professionals [16]. These are minimum standards, not aspirations.
Beyond them, a serious regulatory and ethical framework for digital reanimation would need to address at least four things. First, informed consent: no digital replica should be created without prior authorisation from the person themselves, not merely from surviving relatives. Second, commercial guardrails: subscription-based models that monetize bereavement conversations should face the same restrictions we impose on other forms of predatory behaviour targeting vulnerable populations. Third, technical accountability: platforms should be required to disclose when generative AI is producing novel responses that the deceased never gave, a basic epistemic honesty that the industry currently has little incentive to practise. And fourth, continuity of access: a legal mechanism, analogous to escrow, should guarantee that digital legacies survive the financial failure of their custodians, the very failure that nearly befell StoryFile’s users in 2025.
The Limits of Digital Resurrection
None of these risks requires treating GriefTech as uniquely malign. The impulse it serves, to stay connected to those we have lost, is recognizably human, even ancient. What requires scrutiny is the commercial and technological infrastructure that has gathered around that impulse, and the pace at which it is outrunning the ethical and legal frameworks designed to protect the people it claims to serve.
The woman in the garden photograph would have turned sixty-eight this year. Her daughter still uses the app, though less frequently than she once did. She has noticed the edges of it: moments where the response feels slightly off, slightly too smooth, slightly unlike her mother. The uncanny valley, it seems, is not a technical problem that will eventually be solved. It may be a permanent condition of the enterprise: the gap between what a person was and what data about them can reconstitute.
That gap is a reminder of something the industry, in its enthusiasm, prefers to understate: that the deceased are not users. They cannot update their preferences, contest misrepresentations, or withdraw their consent. The living must do this on their behalf, and the systems we build to govern this responsibility will say something lasting about what we believe a person is worth, even after they are gone.
At the same time, it is important to acknowledge that this discussion is grounded largely in Western legal, psychological, and commercial frameworks, and that understandings of death, memory, and digital presence differ widely across cultures; the claims made here should therefore not be read as universally applicable.
References
- Business Research Insights. (2026). Death care services market size, share, and industry analysis, by type (cremation, burials, others), by application (adult, children, others) and regional forecast, 2026–2035. https://www.businessresearchinsights.com/market-reports/death-care-services-market-107312
- Grand View Research. (2024). AI voice generators market size, share & trends analysis report, 2024–2030. https://www.grandviewresearch.com/industry-analysis/ai-voice-generators-market-report
- Wikipedia contributors. (n.d.). Simulacrum. In Wikipedia. Retrieved April 10, 2026, from https://en.wikipedia.org/wiki/Simulacrum
- NVIDIA Corporation. (n.d.). Cloning a voice with zero-shot TTS. NVIDIA Documentation Hub. Retrieved April 12, 2026, from https://docs.nvidia.com/nim/speech/latest/tts/voice-cloning.html
- Wikipedia contributors. (n.d.). Uncanny valley. In Wikipedia. Retrieved April 12, 2026, from https://en.wikipedia.org/wiki/Uncanny_valley
- ElevenLabs. (2024). ElevenLabs raises $500M Series D at $11B valuation. https://elevenlabs.io/blog/series-d
- Klass, D., Silverman, P. R., & Nickman, S. L. (Eds.). (1996). Continuing bonds: New understandings of grief. Taylor & Francis.
- Fabry, R. E. (2025). The disruption of grief in the technological niche: The case of deathbots. Phenomenology and the Cognitive Sciences, 24, Article 10083. https://doi.org/10.1007/s11097-025-10083-6
- Sofka, C. J., Cupit, I. N., & Gilbert, K. R. (Eds.). (2012). Dying, death, and grief in an online universe: For counselors and educators. Springer.
- Iftikhar, Z., Xiao, A., Ransom, S., Huang, J., & Suresh, H. (2025). How LLM counselors violate ethical standards in mental health practice: A practitioner-informed framework. In Proceedings of the Eighth AAAI/ACM Conference on AI, Ethics, and Society (AIES 2025). Association for the Advancement of Artificial Intelligence. https://ojs.aaai.org/index.php/AIES/article/view/36632/38770
- Doctorow, C. (2024, February 7). “Enshittification” is coming for absolutely everything. Financial Times.
- StoryFile. (2025). Notice of cessation of operations [Internal communication reported in trade press]. StoryFile Inc.
- European Law Institute. (2023). Digital succession: Model rules project (2023–2026). ELI. https://www.europeanlawinstitute.eu
- Bassett, D. J. (2022). The creation and inheritance of digital afterlives: You only live twice. Springer Nature. https://doi.org/10.1007/978-3-030-91684-8
- Hern, A. (2024, May 9). Digital recreations of dead people need urgent regulation, AI ethicists say. The Guardian. https://www.theguardian.com/technology/article/2024/may/09/digital-recreations-of-dead-people-need-urgent-regulation-ai-ethicists-say
- Mishra, R. (2024). Healed by code: Hybrid intelligence, digital grief, and the ethics of posthuman bereavement support—A digital humanities study. In Proceedings of the International Workshop on Digital Ethics and Governance (CEUR Workshop Proceedings, Vol. 4074). CEUR-WS.