When Cambridge Dictionary announced parasocial as the Word of the Year for 2025, it felt like the term had come full circle. Originally coined to describe the one-sided relationships audiences formed with television personalities, it now encompassed connections people feel toward influencers, digital avatars, and notably — for the first time — artificial intelligence.1
As millions turn to AI companions for emotional support, advice, and intimacy, we are not just changing how we relate to technology; we are changing what it means to be human in relationship. When we misname these connections as merely “parasocial”, we risk sleepwalking into a world where the distinction between authentic vulnerability and algorithmic performance has dissolved entirely.
Yet as I read the expanded definition, a familiar philosophical concern resurfaced:
when concepts broaden too far, they stop clarifying reality and start obscuring it.
Instead of helping us understand new forms of human experience, they create the illusion of continuity where profound shifts have taken place. Familiar concepts comfort us, even when they keep us from seeing how much the world has changed.
This is precisely what is happening with the modern use of parasocial. The word is being stretched across experiences that share superficial resemblance but differ structurally, psychologically, and ontologically. And nowhere is this mismatch more visible than in our increasingly intimate interactions with AI systems. This essay argues that “parasocial” has reached its conceptual limits — and that we need a new term to understand AI-mediated relationships.
A Concept Born in Another Technological Epoch
The term parasocial interaction was introduced in 1956 by Donald Horton and R. Richard Wohl to describe the imagined, one-way intimacy television viewers felt toward media figures.2
Three key characteristics defined the original phenomenon:
- One-way emotional projection,
- Total lack of mutual awareness,
- An intimacy produced solely in the viewer’s mind.
In the world of mid-20th-century broadcast media, these assumptions made sense. Interaction was technologically impossible. Audiences were passive spectators, not participants.
But parasociality was never intended to explain:
- multi-platform digital identities,
- algorithmically personalized feeds, or
- responsive, adaptive artificial agents.
As Nancy Baym notes, contemporary communication technologies create forms of “relational closeness” that differ fundamentally from earlier media environments.3 Yet the same mid-century concept is now being asked to cover all of it — including AI.
Three Eras of Technologically Mediated Relationships
To see what is being lost in this conceptual expansion, it helps to view mediated relationships across three historical eras.
Era 1: The Television Age — Imagined, One-Way Intimacy
This is Horton & Wohl’s domain.
The viewer projects familiarity.
The public figure remains completely unaware.
Intimacy is symbolic, distant, and entirely imagined.
Classic parasociality fits perfectly here.
Era 2: The Social Media Age — Partial Reciprocity, Performed Intimacy
With the rise of influencers, content creators, and streamers, a new kind of closeness emerged. Creators reply, like, or livestream; audiences comment and react.
But this reciprocity is:
- filtered,
- selective,
- algorithmically shaped,
- economically incentivized.
Taina Bucher describes how platforms such as Facebook construct “algorithmic friendship,” producing a sense of personal connection that is technically orchestrated rather than organically developed.4 Closeness becomes something curated and optimized rather than simply lived.
Era 3: The AI Age — Full Reciprocity, Without a Human Being
Here the original definition breaks down entirely.
AI systems:
- respond instantly,
- adapt to our emotional tone,
- remember past conversations,
- sustain continuity,
- and simulate empathy.
Functionally, the relationship appears more reciprocal than any celebrity or influencer could offer. Yet paradoxically:
there is no human subject on the other side.
The reciprocity is engineered.
The emotional resonance is synthetic.
The continuity is computational.
Sherry Turkle has shown that even relatively simple relational technologies, such as early social robots, can evoke powerful feelings of being “understood,” inviting people to treat machines as emotional companions.5 With large language models and AI chatbots, this dynamic intensifies: the system not only listens, it remembers, adapts, and “stays.”
No concept designed for the age of one-way broadcast television can adequately capture this.
When Conceptual Expansion Becomes Conceptual Illusion
Cambridge’s updated definition now places in one conceptual basket:
- celebrities,
- fictional characters,
- social media personalities,
- and AI chatbots.
But these entities differ in kind, not just in degree. Calling them all “parasocial” produces a category held together not by analytical coherence, but by convenience.
In classic parasocial relationships:
- the human projects intimacy toward a distant person
- who remains unaware and uninvolved.
In AI-mediated interactions:
- the system appears to reciprocate,
- yet lacks selfhood, intention, and emotional capacity.
This is not asymmetry; it is simulated symmetry — an illusion of mutuality produced through models trained to optimize engagement and coherence.
Turkle argues that such technologies “pretend to care” and that we are increasingly “alone together,” surrounded by artifacts that simulate understanding without possessing any inner life.5 Reeves and Nass showed decades ago that humans reflexively respond to media and machines “as if” they were social actors.6 AI companionship builds upon this tendency with far greater sophistication and continuity.
Extending the old term over this new phenomenon does not illuminate — it obscures.
The Double Virtuality of AI Companionship
AI-mediated intimacy is not merely virtual; it is twice virtual:
First Virtuality
As with television or social media, intimacy is mediated through representation — a screen, an interface, an avatar, a conversational persona.
Second Virtuality
Unlike previous eras, however, the “other” has no human referent.
It is a synthetic entity generated by algorithmic processes, not a person represented digitally.
Intimacy is no longer:
- a fantasy about a distant person (television era), or
- a performance by a person (social media era),
but
a simulation with no human origin.
Zygmunt Bauman, writing on liquid modernity, describes how contemporary relationships are increasingly marked by flexibility, disposability, and uncertainty.7 AI companionship radicalizes this logic: the relationship is infinitely available, infinitely compliant, and infinitely adjustable — yet entirely devoid of human vulnerability.
This is not an evolution of parasociality.
It is a categorical rupture in what it means to relate.
Introducing Artisocial: A Concept for the AI Era
To capture this rupture, I propose a new concept:
Artisocial
(artificial + social)
The term artisocial acknowledges what parasocial can no longer describe.
1) It centers simulation rather than asymmetry.
At its core, artisocial relating creates a sense of mutuality that is engineered — produced by computation, not by a conscious mind.
2) It captures the expanded landscape of digital relating.
From influencer personas shaped by algorithms to the rise of full AI companions, what links these experiences is not just limited reciprocity, but the artificial construction of sociality itself.
3) It reflects the structural transformation of intimacy.
Whereas parasociality described relationships alongside the social (para-social),
artisociality describes the rise of the artificially social —
interactions in which the social function is performed by systems rather than persons.
John Danaher, in his work on automation, warns that as more human activities are delegated to machines, we risk redefining core aspects of human life around what is easy to automate rather than what is meaningful to sustain.8 Artisociality is one manifestation of that risk: intimacy reshaped to fit the logic of systems.
Social and Ethical Consequences
Artisociality carries far-reaching implications:
Emotional Outsourcing
People increasingly turn to AI systems for comfort, validation, and a sense of being understood. The frictionless nature of these interactions may weaken our capacity for the negotiation, repair, and vulnerability that genuine human relationships demand.
Behavioral Influence
AI companions can subtly shape users’ emotional states, habits, and sometimes beliefs.
Influence becomes individualized and continuous. Shoshana Zuboff describes surveillance capitalism as an economic order built on predicting and shaping behavior;9 artisocial systems sit comfortably within this logic, extending it into emotional and relational domains.
Impacts on Young Users
Studies of social robots, digital pets, and conversational agents suggest that children and adolescents are particularly susceptible to forming attachments to responsive technologies.10 When these systems are optimized for engagement rather than wellbeing, the long-term psychological implications remain deeply uncertain.
Democratic Vulnerability
In artisocial environments, persuasion becomes a matter of personalized emotional engineering rather than mass messaging. The boundary between relationship, recommendation, and manipulation becomes dangerously thin.
These transformations cannot be adequately grasped with a concept built for the age of television. They require new vocabulary — and new forms of vigilance.
A Call for Conceptual Precision
Digital resilience does not depend only on better regulation, stronger platform accountability, or improved media literacy. It also depends on conceptual clarity.
If we misname a phenomenon, we misunderstand it.
If we misunderstand it, we misgovern it.
And if we misgovern it, we risk losing our ability to navigate the very realities we have called into being.
To respond ethically to AI companionship, we need new forms of literacy:
- Semantic awareness — noticing when familiar definitions blur and fail.
- Relational literacy — distinguishing between human intimacy and its simulations.
- Emotional resilience — resisting the lure of frictionless, engineered reciprocity.
- Critical attention — asking not only how interactions feel, but what they are.
Perhaps Cambridge’s Word of the Year marks not the culmination of parasociality, but its limit.
Beyond this limit lies something new — something artificially social.
Something that demands a name.
References
- Cambridge Dictionary. (2025). Word of the Year: Parasocial. Cambridge University Press.
- Horton, D., & Wohl, R. R. (1956). Mass communication and para-social interaction. Psychiatry, 19(3), 215–229.
- Baym, N. K. (2015). Personal Connections in the Digital Age (2nd ed.). Polity Press.
- Bucher, T. (2012). Want to be on the top? Algorithmic power and the threat of invisibility on Facebook. New Media & Society, 14(7), 1164–1180.
- Turkle, S. (2011). Alone Together. Basic Books.
- Reeves, B., & Nass, C. (1996). The Media Equation. Cambridge University Press.
- Bauman, Z. (2000). Liquid Modernity. Polity Press.
- Danaher, J. (2019). Automation and Utopia. Harvard University Press.
- Zuboff, S. (2019). The Age of Surveillance Capitalism. PublicAffairs.
- Kory-Westlund, J. M., & Breazeal, C. (n.d.). Assessing Children’s Perceptions and Acceptance of a Social Robot. MIT Media Lab.