The Hidden Cost of Conversational AI: From Convenience to Atrophy

AI chatbots promise instant answers and tireless support. But beneath the convenience lies a hidden cost: the quiet erosion of our autonomy, our competence, and our capacity for genuine human connection.

There’s something seductive about the instant answer, the always-available companion, the digital mind that never tires of our questions. Some users call it “the pull”; others call it “a friend”, “a lover”, even “God”.

AI chatbots promise personalized help and immediate understanding, a tireless companion who knows just what to say, what you feel and need.

But what happens when this convenient relationship begins to quietly reshape not just how we think, but who we are, and who we are supposed to become?

Beyond the convenience these systems offer, their use comes with hidden costs. Our reliance on conversational AI may be eroding our autonomy, self-efficacy, and core aspects of psychological functioning in ways we’re only beginning to understand. Well-established theories in modern cognitive psychology allow us to sketch these technosocial undercurrents.

Our choices and our learning history make us who we are; they shape us into our individual form. Depending on how we use our potential, we create our essence, our “self”, whether consciously or not.

The effects of our interactions with technology will not be immediate, because we are dealing with an informal, mostly unconscious learning process. Change will happen slowly, while we feel relieved of the burden of our existence for as long as we are immersed in the warmth of a conversation’s relational offerings. This is a treacherous situation for our mental health.

This article analyzes the psychological, relational, and ethical costs of conversational AI, from autonomy and self-efficacy to social connection, arguing that what feels like support can lead to forms of human atrophy.

At the Core: Autonomy and Freedom of Choice

Jean-Paul Sartre’s famous line from “Existentialism is a Humanism” comes to mind. His insight revealed the paradox of our existence:

[Man is c]ondemned, because he did not create himself, yet is nevertheless at liberty, and from the moment that he is thrown into this world he is responsible for everything he does.1

Our times offer us an unprecedented level of convenience: whenever we find ourselves in situations of uncertainty or difficulty, we no longer need to sit with discomfort or ask friends for help; we can simply open our chatbots, and they will immediately offer answers, support, and a sense of orientation.

Within seconds, we will receive a perfectly formatted response, complete with reassuring guidance. But what feels like help or progress is merely the output of a system that predicts the next most plausible word, a phenomenon described by Emily Bender and her colleagues as “stochastic parrots.”

Such support is rarely refused. Yet this is the core problem: Conversational bots can feel helpful while simultaneously undermining the psychological needs they appear to satisfy. What are these fundamental needs a chatbot can cater to?

Our Fundamental Needs

To answer this, we can turn to one of psychology’s most well-established theories of human motivation. Self-Determination Theory identifies three fundamental human needs: autonomy, competence, and belonging.2 We need to feel self-directed in our choices, capable of mastering challenges, and connected to others. As inherently social beings, our wellbeing depends on meaningful human connection and on the reciprocal relationships that form the foundation of our social fabric.

AI chatbots create a treacherous illusion of meeting all three needs simultaneously.

  1. The need for autonomy seems satisfied when the bot responds to our exact requests. But while we may have agency in crafting the prompt, we lose agency in actually thinking through the subject matter itself—our decisions become externally guided rather than self-generated.
  2. The need for competence appears met when the chatbot helps us produce impressive work quickly. But we’re learning performance through delegation, not mastery—we’re outsourcing the very competence we need to develop.
  3. The need to belong feels addressed when the bot responds warmly and engages without judgment. It offers what experts call Unconditional Positive Regard, a core condition of the therapeutic relationship.

Conversational AI and the ELIZA Effect

Joseph Weizenbaum first triggered those needs when he offered his scripted chatbot ELIZA to users. The ELIZA effect describes our tendency to attribute human-like understanding or intelligence to computers or programs, even though the programs are merely following simple rules or patterns.

The effect is named after ELIZA, an early chatbot Weizenbaum created in the 1960s, which gave the illusion of empathy by rephrasing people’s statements as questions, making users feel understood despite the program having no actual comprehension. ELIZA was modeled on Carl Rogers’ core conditions for therapeutic rapport in humanistic therapy, including empathy and Unconditional Positive Regard as therapeutic attitudes.3

A modern conversational bot also creates this ELIZA effect, the sensation of being understood and valued. This is simulated connection, fake empathy, what experts call “sycophancy”, a feature that also isolates us from genuine human relationships, which seem harder to manage than frictionless human-bot interaction.

The ELIZA effect is central to understanding why chatbot interactions feel so satisfying in the moment, even as they erode our psychological foundation over time. Our needs appear to be met.

We’re getting the emotional reward without the actual nourishment, though. We even become romantically involved as we long for real relational connection. But when conversational systems offer unconditional affirmation, they invite projection and dependency without reciprocity (Read more on this in our related article on AI Intimacy). What feels like care is a simulation that cannot truly respond, resist, or take responsibility. The risk is not dramatic misuse, but quiet substitution: we replace difficult human connection with effortless interaction, and the capacities that protect us psychologically begin to atrophy.

The Atrophy of Connection

There’s a particular form of loneliness that comes from being surrounded by people. We accumulate followers and friend-counts that would have seemed impossible to our ancestors, yet something fundamental withers.

We are losing the capacity that sustained human survival for millennia: the ability to truly support one another. What the existentialists didn’t anticipate: we would find ways to have the dependency without the connection.

Sartre wrote that “hell is other people,” but he also implicitly grasped what we all too often forget – that other people are also the only “heaven” we have. We depend on each other; exclusion from human networks causes the body to experience the same pain as physical injury.

Social Support – Our Life Line

Research shows social support shields us against life’s storms, and those with fewer social connections die sooner.4 Not metaphorical death but literal death hastened by isolation.

Conversational bots offer all types of support. They fake empathy, produce texts, learn for us, think for us. We can now simulate connection without creating it. We broadcast vulnerability while avoiding intimacy. Receiving care without reciprocity feels effortless; human-to-human interaction does not.

Nicholas Carr (2025) concluded that technologies of connection tear us apart. First they tore our social fabric to pieces by privileging speed, scale, and visibility over trust, continuity, and mutual responsibility. Now they fragment our capacity for coherent thinking, which itself depends on shared social reality. Simone de Beauvoir observed that identity is constructed through interaction. So is our capacity for mutual support. But when these interactions are increasingly mediated or replaced, the practices that sustain them fade, and the skills to maintain relationships atrophy.5

The research is unambiguous: isolation kills. It increases cardiovascular reactivity, slows wound healing, compromises immunity. Yet there’s another death: the death of the skills themselves. Each generation grows up more mediated, learning less about being there for others.

Erosion of Self-Efficacy

Self-efficacy is our belief in our ability to achieve goals. It predicts persistence and resilience. Self-efficacy is a belief related to one’s successful being-in-the-world. Success feels good. The feeling of competence makes us proud. Those good feelings are rewards following our conscious choice to accept and grow from challenges.6

Crucially, self-efficacy does not develop in isolation. As Bandura emphasizes, it is reinforced through social feedback: through being challenged by others, having our efforts recognized, and knowing that our contributions hold up in a shared world.

When we rely on bots, we’re learning performance through delegation rather than mastery. The bot makes us feel competent by delivering impressive output. But we did not participate in building the competence. What looks like skill is outsourced rather than developed. 

Over time, reliance on such systems can weaken internal confidence by reducing opportunities for mastery and socially grounded feedback, thereby undermining the self-efficacy Bandura identifies as essential for effective agency. When AI-assisted performance is questioned, uncertainty about authorship and competence may increase defensiveness. And in situations where the system is unavailable, difficulties tend to become more persistent, particularly under conditions of stress.

The Uncomfortable Truth

Treating bots as tools rather than thought partners, a practice that is already risky in itself, might reduce the process of atrophy, but there’s no safety net that fully protects us from its often hidden consequences. Chatbot interactions involve implicit learning processes that operate below conscious awareness.

What might help: engage in effortful problem-solving first. Seek human feedback alongside algorithmic suggestions. Cultivate self-reflection – ask yourself why an answer resonates before accepting it.

And here’s the harder truth: Using AI is a matter of personal understanding, responsible choice, and moral decision-making.

Contemporary AI systems depend on large-scale extraction of human-created content, amounting to piracy under existing copyright standards. Their training requires energy-intensive computation and labor practices that raise unresolved ethical and environmental concerns. The cognitive and mental health implications of both system use and system production therefore warrant serious attention.

To use these systems mindfully means reckoning with whether we’re willing to support an exploitative and environmentally harmful technology beyond our individual self-interest. We decide.

Conclusion

Jean-Paul Sartre’s insight about freedom cuts to the heart of our AI moment. We cannot escape responsibility by delegating our thinking to machines. Even in that delegation, we’re making a choice about what kind of beings we want to become.

Conversational bots are powerful, but unmoderated use risks psychological harms, both subtle and severe. These include the erosion of autonomy, the decline of self-efficacy, the weakening of social bonds, and the atrophy of critical thinking.

As long as providers don’t change their design away from its engagement focus, which contributes to sycophancy and to plausible rather than fact-based output, retaining our psychological dignity requires intentional boundaries and conscious practice. While the interaction feels good, it corrupts our autonomy in choices and decision-making.

It requires choosing the harder path sometimes: the effortful thinking, the uncomfortable conversation, the tolerance for not knowing, the willingness to sit with difficulty rather than outsourcing it.

Because what makes us human isn’t our ability to access information quickly. It’s our capacity for genuine agency, for wrestling with uncertainty, for connecting authentically with each other and with ourselves. These are things no chatbot will ever be able to do for us, and capacities we risk losing if we are not careful.

The technology will continue to advance. The question is whether we’ll maintain our autonomy alongside it, or surrender the very freedom that Sartre reminds us we cannot escape.

References

  1. Sartre, J.-P. (2007). Existentialism is a humanism. Yale University Press. (Original work published 1946)
  2. Ryan, R. M., & Deci, E. L. (2000). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist, 55(1), 68–78.
  3. Natale, S. (2021). The ELIZA Effect. Deceitful Media, 50–67. https://doi.org/10.1093/oso/9780190080365.003.0004
  4. Berkman, L. F., & Syme, S. L. (1979). Social networks, host resistance, and mortality. American Journal of Epidemiology, 109(2), 186–204.
  5. Carr, N. (2025). Superbloom: How technologies of connection tear us apart. W. W. Norton & Company.
  6. Bandura, A. (1997). Self-efficacy: The exercise of control. W. H. Freeman.

Charlotte Schüler

Charlotte Schüler is a learning technologist and cyberharm counselor specializing in how AI and social media UX can undermine human self-determination. She combines technical expertise with existential counseling to support those affected by digital abuse, addiction, and harassment - cutting through tech hype to advocate for digital safety and wellbeing.

