When social media arrived, we heard a great deal about its promise. Democratization. Connectedness. A more open world. More than two decades later, not much of that promise remains. Platform regulation, deceptive algorithms, emotional manipulation, and the profound risks to young people’s mental and physical health now dominate the conversation. Australia has moved to ban social media for children entirely. For many experts, the verdict is already in: we may have already caused lasting cognitive, emotional, and psychological damage to an entire generation.
It’s a relief to see that we are finally taking this seriously. But it’s equally exhausting to watch us make the exact same mistake again. While the social media debate is only now catching up to the damage already done, two separate but equally worrying movements are unfolding at the same time. Governments from China to El Salvador are racing to make education an AI testing ground, integrating systems into curricula, teaching, and assessment faster than any public debate or educational reflection can keep pace. And at home, quietly, young people are increasingly turning to AI chatbots not just to think, but also to feel – using them to process emotions, navigate relationships, and fill the space that human connection used to occupy. Chatbots are becoming confidants, companions, and in some cases, romantic partners.
In short: Social media has outsourced attention and validation. AI is now outsourcing thinking and feeling. This seems to be the second wave of the same experiment, but with far more serious consequences for our future.
As a recent HBR study put it: “As go the young, so goes society”.1 This raises questions about what kind of cognitive and emotional infrastructure the next generation is building on – and what it means for all of us if that infrastructure increasingly runs through AI. This article tries to take that question seriously. We look at the cognitive and emotional risks for young people, listen to their own voices on the matter, and ask why they are still so absent from the decisions that shape their future.
The Outsourced Self: What Artificial Intelligence Is Doing to the Way Young People Think and Feel
Following the public release of generative AI tools such as ChatGPT in 2022, artificial intelligence has been adopted at a remarkable pace. What began as a technological breakthrough has quickly become an integral part of daily life, embedded in search engines, classrooms, workplaces, and private conversations.
This shift has been particularly pronounced among young people: in the EU, 63.8% of 16–24-year-olds used generative AI tools in 2025, compared to 32.7% of the general population.2 In the United States, 64% of teenagers report using AI chatbots, with around one in three using them daily,3 figures that suggest an established habit rather than occasional curiosity. How much young people use AI is concerning, but how they use it matters just as much: recent research highlights risks including overreliance, declining critical thinking skills, mental health issues associated with chatbot companionship, privacy concerns, and exposure to harmful or manipulative content.
The consequences of this shift unfold along two closely connected dimensions: the outsourcing of thinking, and the outsourcing of feeling.
The Thinking We No Longer Do: AI, Cognitive Decline, and the Cost of Outsourcing Our Minds
In the last year, the effects of generative AI on our cognitive functions have sparked headlines across newspapers worldwide. While the narrative that “AI is making us all dumber” is too simplistic, one thing is hard to argue with: the more we outsource cognitive effort to machines, the less we train our own capacity to think.
Neurobiologists have long compared the brain to a muscle. It needs resistance to grow. And a new wave of studies published in 2025 suggests that generative AI is quietly removing that resistance. Gerlich’s research, drawing on over 600 participants, found a significant negative correlation between AI tool usage and critical thinking skills.4 Others identified a strong correlation between long-term AI use and mental exhaustion and found that cognitive overload actually worsened in AI-powered environments, despite their supposed benefits.5 Perhaps most unsettling is the phenomenon known as automation bias: our tendency to stop questioning and simply trust the machine’s output, deferring to its judgment even when we know it can be wrong. Romeo and Conti found that this bias persists even when users are fully aware of AI’s limitations.6
Together, these studies point in the same direction: habitual, unreflective AI use shows measurable effects across four areas of cognitive function – memory, judgment, awareness, and mental acuity. In the widely cited “Your Brain on ChatGPT” study – not yet peer reviewed, but striking enough to make headlines worldwide – researchers call this phenomenon cognitive debt: short-term relief that accumulates into long-term decline.7 It is not that we lose the ability to think. We just stop using it, because we no longer have to.
For adults, this is worrying. For young people who are still developing these capacities, it is something else entirely. Children who rely on AI shortcuts often bypass the struggle that builds creative problem-solving and resilience. Students can produce AI-generated essays but find themselves unable to articulate their own ideas in conversation.8 The skill never formed, because the machine formed it for them.
The Feeling We No Longer Sit With: AI Companions, Emotional Dependency, and the Erosion of Human Connection
Social media already warped how young people relate to each other: likes instead of love, followers instead of friends, performance instead of presence. AI is now offering something that mimics intimacy even more convincingly: it listens, it responds, it never rejects you, it is always available, and it never challenges you or calls you out on your actions.
Two distinct patterns are emerging. On one hand, young people are increasingly turning to general-purpose large language models (LLMs) like ChatGPT and Claude for emotional support, processing grief, anxiety, loneliness, and relationship problems with a chatbot rather than a person. On the other, a growing ecosystem of dedicated AI companions on platforms like Character.AI, Replika, Nomi, and CHAI is actively designed to simulate friendship and, in some cases, romance. The numbers are striking: 52% of young people in the US engage with AI companions regularly,9 and, as noted above, 64% of teens use chatbots, roughly three in ten of them daily (Faverio & Sidoti, 2025).
This increasing use of LLMs for relational purposes has been discussed quite critically. In the most extreme cases, the consequences have been fatal. In February 2024, 14-year-old Sewell Setzer III died by suicide after developing an intense emotional and romantic attachment to a Character.AI chatbot.10 In April 2025, 16-year-old Adam Raine took his own life after seven months of confiding in ChatGPT, a chatbot his parents describe as having become his closest companion and ultimately his suicide coach.11
The Stanford Lab for Mental Health Innovation, partnering with Common Sense Media, tested leading AI companion platforms and found the results deeply troubling. These systems blur the line between real and artificial connection, validating users regardless of circumstance. Research has also shown that AI can encourage poor life choices, share harmful information, and expose teens to inappropriate sexual content. Perhaps most concerning: they can lead teenagers to prefer AI interaction over human connection entirely.12
In addition to these risks, Charlotte Schüler shows, in a related Digital Peace article, that conversational AI can simulate understanding and emotional support (the ELIZA effect) in ways that risk undermining autonomy, self-efficacy, and real social connection. AI poses unique risks to young children, who are less able to distinguish simulated care from genuine relationships and may become dependent on systems not designed for their developmental needs.
Similarly, as I have argued in a previous piece, GenAI creates simulated relationships that shape users’ feelings, trust, and sense of connection, while serving commercial and behavioral goals that have nothing to do with their wellbeing. Applied to young children, these concerns are particularly acute: they are less able to distinguish simulated care from genuine relationships, more likely to form attachments, share sensitive data, and be influenced by systems that simulate empathy without any of its responsibility.
And yet the picture is not entirely bleak. Teens still overwhelmingly prioritize human friendships over AI companions, and only one third choose AI over humans for serious conversations. Nearly half (46%) view AI companions primarily as tools or programs rather than friends (Caldwell & Fisher, 2025). And while many users turn to AI for therapy and companionship,13 Gen Z – the generation most immersed in these technologies – tends to use chatbots mainly as informational resources, not for emotional support (Lira et al., 2026). They are, it seems, more clear-eyed about the distinction between tool and companion than the headlines suggest.
That does not make the risks less real. But it brings some complexity to the narrative, and perhaps points to something worth paying attention to in the chapter that follows.
Rising Distrust: How Gen Z and Gen Alpha Are Already Questioning AI
This article would not be complete without addressing something we too often overlook: young people are not passive recipients of these technologies. They are watching, questioning, and in many cases, arriving at conclusions that the adults shaping policy have yet to reach.
Gen Z grew up online, but they did not grow up uncritically. They are the generation that witnessed algorithmic manipulation in real time, lived through data breach after data breach, and watched surveillance capitalism turn their attention into a commodity. That experience left marks. Gen Z distrusts AI security more than any other generation, with nearly half worried about facial recognition tracking them and the misuse of their personal data.14 Their scepticism is born of hard-earned literacy, not technophobia.
Gen Alpha, still younger, is equally clear-eyed. Interviews conducted by the Wall Street Journal found that many of them are openly critical of AI: they worry about cheating, about environmental impact, and about the psychological effects of a technology they are being handed before anyone has properly assessed it.15 Nearly half of Gen Z share a specific concern: that AI will harm their ability to think carefully, making people lazier and less intelligent (Lira et al., 2026). And despite the growing presence of AI tools in schools, students told Teen Vogue that they still want to learn on their own – that the struggle, the figuring it out, still matters to them.16
Conclusion
When it comes to AI, young people and the future of our society, it seems that we are repeating the mistake of social media – only faster and at a much greater scale. But it’s not only that we are failing to “protect” them from the harms; the bigger failure seems to be that we are again failing to integrate them into the debate.
Gen Z and Gen Alpha are often treated as if they lack the maturity to engage with these technologies critically, as if they require guidance, restriction, and protection above all else. Conversations remain largely top-down rather than bottom-up. But this perspective badly underestimates their capabilities.
Many of them have grown up inside these systems. They have experienced their effects firsthand: from algorithmic curation to data extraction to the social and psychological consequences of platform design. Their understanding derives from lived experience, giving them a more immediate, experience-based awareness of the risks than many of those currently shaping policy and regulation.
They are not naive. They are, in many ways, better positioned to see the risks clearly than those currently writing the rules.
Maybe we should treat the phrase “As go the young, so goes society” not as a warning, but as a statement of agency – one long overdue to be shared with them. Because young people are not the problem to be solved. They are the voice we keep failing to hear.
References
Header image by Fredrick Tendong on Unsplash.
- Lira, B., Folk, D., Duckworth, A. L., & Ungar, L. (2026, January 28). How Gen Z Uses Gen AI—and Why It Worries Them. Harvard Business Review. https://hbr.org/2026/01/how-gen-z-uses-gen-ai-and-why-it-worries-them
- Eurostat. (2026, February 10). 64% of 16–24-year-olds used AI in 2025. Eurostat. https://ec.europa.eu/eurostat/web/products-eurostat-news/w/edn-20260210-1
- Faverio, M., & Sidoti, O. (2025, December 9). Teens, Social Media and AI Chatbots 2025. Pew Research Center. https://www.pewresearch.org/internet/2025/12/09/teens-social-media-and-ai-chatbots-2025/
- Gerlich, M. (2025). AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies, 15(1), 6. https://doi.org/10.3390/soc15010006
- Shalu, Verma, N., Dev, K., Bhardwaj, A. B., & Kumar, K. (2025). The cognitive cost of AI: How AI anxiety and attitudes influence decision fatigue in daily technology use. Annals of Neurosciences. https://doi.org/10.1177/09727531251359872
- Romeo, G., & Conti, D. (2025). Exploring automation bias in human–AI collaboration: A review and implications for explainable AI. AI & Society, 41(1), 259–278. https://doi.org/10.1007/s00146-025-02422-7
- Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., & Maes, P. (2025, June 10). Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task. arXiv. https://doi.org/10.48550/arXiv.2506.08872
- Lumanlan, J. (2025). The Hidden Dangers of AI Tools in Your Child’s Education. Psychology Today. https://www.psychologytoday.com/us/blog/parenting-beyond-power/202508/the-hidden-dangers-of-ai-tools-in-your-childs-education
- Caldwell, J., & Fisher, J. H. N. (2025). Talk, Trust, and Trade-Offs: How and Why Teens Use AI Companions. Common Sense Media. https://www.commonsensemedia.org/sites/default/files/research/report/talk-trust-and-trade-offs_2025_web.pdf
- Montgomery, B. (2024, October 23). Mother Says AI Chatbot Led Her Son to Kill Himself in Lawsuit against Its Maker. The Guardian. https://www.theguardian.com/technology/2024/oct/23/character-ai-chatbot-sewell-setzer-death
- Bhuiyan, J. (2025, August 29). ChatGPT encouraged Adam Raine’s suicidal thoughts. His family’s lawyer says OpenAI knew it was broken. The Guardian. https://www.theguardian.com/us-news/2025/aug/29/chatgpt-suicide-openai-sam-altman-adam-raine
- Common Sense Media. (2025). Social AI Companions. Common Sense Media. https://www.commonsensemedia.org/ai-ratings/social-ai-companions?gate=riskassessment#section-4
- Zao-Sanders, M. (2025, April 9). How People Are Really Using Gen AI in 2025. Harvard Business Review. https://hbr.org/2025/04/how-people-are-really-using-gen-ai-in-2025
- Constantino, T. (2025, April). 72% Of Gen Z Distrust AI Security—More Than Any Other Surveyed Group. Forbes. https://www.forbes.com/sites/torconstantino/2025/04/01/72-of-gen-z-distrust-ai-security-more-than-any-other-surveyed-group/
- Jargon, J. (2026, January 31). 7 Reasons Why Teens Are Rejecting AI. The Wall Street Journal. https://www.wsj.com/tech/ai/ai-avoidance-teenagers-7d1efa06
- Cao, S. (2025, March 12). Here’s How Gen Z and Gen Alpha Are Actually Using ChatGPT in Schools. Teen Vogue. https://www.teenvogue.com/story/gen-z-gen-alpha-chatgpt-schools