Recently, a friend of mine has been feeling increasingly uneasy about their job. When they started their new project about six months ago – training an artificial intelligence system – they were excited about the opportunity to explore a technology that is currently turning the world upside down. But over time, they began to realise: they were training their own replacement. Not just a replacement for themselves, but for the job itself.
With every day they fed the system data, refined its algorithms, and corrected its errors, that feeling of unease grew stronger. What had started as a technically fascinating task began to feel like a quiet betrayal. They were contributing to a future in which the very meaning of their contribution was uncertain – a future in which machines are meant to take over human tasks.
When they told me about it, I couldn’t stop thinking about the strange paradox in it: all the effort, all the care and skill invested, just so their work would eventually no longer be needed. My friend is just one of many whose work enables AI to take over the tasks of the workforce. So as AI takes on human work, the question of what happens to us – the humans – becomes unavoidable.
I’m certainly not the first to ask this question. My daily LinkedIn feed is filled with intense debates about the future of work in the age of AI, with predictions ranging from salvation to extinction. In fact, it is one of the few topics where the feed is no echo chamber: opinions genuinely collide. Techno-optimists argue that AI will create more jobs than it eliminates, ushering in a new era of productivity. Pessimists warn of the opposite – that perhaps only ten percent of today’s jobs will survive automation.
And even the predictions of renowned institutions could not be more different: the International Monetary Fund (IMF) estimates that nearly 40 percent of all jobs worldwide could be affected by AI-related automation, with advanced economies seeing up to 60 percent of jobs impacted [1]. The World Economic Forum (WEF), in its 2025 Future of Jobs Report, takes a more optimistic stance: although 92 million roles could be displaced by 2030, AI and automation will create 170 million new jobs globally, resulting in a net increase of 78 million jobs worldwide [2].
Yet amid all this noise about numbers and percentages, something crucial gets lost. Everyone is debating whether AI will make humans obsolete, but no one seems to be asking what that actually means for us – not as workers or statistics, but as people.
Beyond productivity, the use of generative AI is already subtly reshaping how we think, act, and relate. It affects our cognitive capacities and even how we define ourselves as humans. For centuries, philosophers have asked what work reveals about the human condition, and what we lose when it becomes detached from meaning. But if work is essential to humanity, then what happens when it is increasingly outsourced to machines? As we move toward a quiet fusion of human and machine, are we truly considering the costs and consequences?
This article takes up these questions. It explores what happens to our sense of purpose, intrinsic motivation, and self-efficacy when tasks are fully automated – and when the human element of work is quietly stripped away. The answers may be more complex than either the optimists or pessimists imagine.
What the Numbers Don’t Tell Us
Before we turn to the philosophical dimension, it’s worth grounding the debate in what we actually know, and what we don’t. The debate over AI’s impact on the labor market is often framed as a binary choice between utopia and apocalypse. Economic research reflects this uncertainty, as seen in the conflicting predictions from the IMF and WEF. Recent studies from Yale suggest that despite widespread public anxiety, there has been no discernible large-scale disruption to the labor market since the advent of powerful generative AI models [3]. The OECD, too, takes a comparatively optimistic stance: according to its Employment Outlook 2025, AI mainly automates routine tasks, allowing humans to focus on more complex, creative, and relational activities. Employment levels, it argues, could remain stable – or even grow – if governments invest in training and upskilling policies to help workers adapt [4].
But historically, technological integration unfolds over decades, not months. Reports from Brookings [5] and Stanford [6] describe a temporary stability yet warn that disruption could come suddenly, once AI systems are more deeply embedded in organisational workflows.
Even among AI’s architects, concern is growing. Dario Amodei, CEO of Anthropic, predicted in 2025 that AI could eliminate up to half of all entry-level white-collar jobs within five years. Geoffrey Hinton, often called the “godfather of AI”, warned that the technology will likely increase unemployment while boosting profits – not because of AI itself, but because of capitalism’s structure [7].
Labour unions have taken a more proactive stance. Across Europe and the U.S., they demand that workers be included in AI design and deployment, insisting that automation must not come at the expense of job quality, rights, or equity. Many have begun negotiating contracts that regulate AI use and mandate retraining, while warning that unrestrained automation could exacerbate inequality and social fragmentation [8].
The Quiet Shifts Already Underway
But even if large-scale job losses haven’t materialized yet, the real transformation has already begun – in how we work, think, and relate. One of the most visible signs is the rise of what critics call “AI slop”: mass-produced, low-quality automated content that appears to be finished work but, upon closer inspection, lacks substance or utility. Research reveals the tangible costs: AI-generated work that fails to meaningfully advance a task creates nearly two hours of corrective labour, while also eroding trust, collaboration, and employee well-being [9].
Cognitively, an over-reliance on AI for creative and analytical tasks risks an atrophy of human creativity and critical thinking. We have analysed the impact of cognitive offloading and the increasing automation of tasks in an earlier article.
From a security perspective, our increasing dependence on proprietary AI systems creates vulnerabilities, from worker surveillance to the consolidation of power in the hands of a few tech corporations. Furthermore, the environmental cost is staggering. Training a single large AI model can consume as much electricity as hundreds of homes for a year and generate hundreds of tons of carbon emissions.
Critically, we must also question the narratives driving the AI discourse. Researchers Alex Hanna and Emily M. Bender argue that the focus on speculative, existential threats – the “risk of extinction from AI” – is often a deliberate misdirection. They contend that this “AI doomer” narrative, promoted by the very companies building the technology, serves to distract regulators from the real, tangible harms AI is causing right now: algorithmic bias, worker exploitation, and the erosion of information integrity [10]. By framing the debate around a far-off apocalypse, the industry can justify further automation while deflecting accountability.
What Aristotle Knew But Silicon Valley Forgot
To understand what is truly at stake, we must turn to philosophy. For Aristotle, humans are zoon politikon: political or social animals who find purpose through meaningful activity within a community [11]. He believed that a good life, or eudaimonia, is achieved through the excellent expression of our uniquely human capacities for reason and virtue. The workplace has become a primary site for this kind of community and activity. What happens, then, when AI begins to dismantle this social fabric, replacing human interaction with human-computer interaction? If our work no longer provides a space for communal flourishing, we risk losing a fundamental source of human purpose.
Karl Marx offers a powerful lens through which to view this predicament. For Marx, labor was not merely a means of survival but the primary way in which humans realize their own potential and shape the world. He argued that under capitalism, this process is corrupted, leading to alienation: from the products we create, from the creative process itself, from our own human nature, and from each other [12]. AI-driven automation threatens to introduce a new, more profound form of alienation. The worker may be removed from the creative and intellectual process altogether, reduced to the role of a mere overseer or, like my friend, a trainer for their own digital successor.
Hannah Arendt’s distinctions in The Human Condition provide another crucial framework. She separates human activity into three categories: labor, the cyclical, biological necessities of survival; work, the creation of a durable, artificial world; and action, the unique human capacity to begin new things and disclose ourselves to others [13]. Arendt was wary of modern society’s tendency to elevate labor above all else, reducing human life to a cycle of production and consumption. AI threatens to accelerate this trend, automating the skilled “work” that builds our world and leaving humans with either the toil of “labor” or nothing at all.
This brings us to philosopher Frithjof Bergmann, whose concept of “New Work” offers a powerful counter-narrative. Bergmann’s central thesis is simple yet revolutionary: “We should not serve work; work should serve us” [14]. He argued that we should pursue work that we truly, truly want to do. Bergmann’s philosophy challenges us to rethink the purpose of automation. Instead of using technology to simply do the same things faster and cheaper, we could use it to liberate ourselves from monotonous work, freeing up human time and energy for creative, communal, and self-determined activities.
These philosophical perspectives illuminate what is at stake beyond economics: the erosion of meaning, identity, and connection. If work is one of the primary ways in which we experience being human, then the quiet infiltration of machines into our creative, emotional, and communicative lives poses a deeper question. What happens when humans no longer stand at the centre of their own creation? When our communication, our words, emotions, and thoughts become increasingly influenced by machines? The real cost of convenience might be nothing less than our humanity.
The Cost of Convenience
The promise of large-scale automation is a world of unparalleled convenience, where tedious tasks are handled by intelligent machines. However, this seductive vision comes with hidden costs that extend far beyond corporate balance sheets. The relentless optimization for efficiency risks a systemic erosion of human capabilities, social cohesion, and planetary health.
The cognitive cost may be the most insidious. When we outsource thinking, problem-solving, and creativity to AI, we risk the atrophy of our own mental faculties. The loss of craftsmanship represents the degradation of skills honed through practice, the decline of mentorship, and the disappearance of the satisfaction that comes from mastering a difficult task. When the process is devalued in favor of the instant product, we lose the journey of learning and growth essential to human development.
Socially, the automation of the workplace threatens to dissolve the communal bonds forged through shared effort. For many, the workplace is a primary source of community, a place where relationships are built and knowledge is shared. As AI systems increasingly mediate our interactions, we risk becoming more isolated, interacting with interfaces rather than with each other.
Furthermore, the planetary cost is immense and often invisible. The digital cloud is not an ethereal entity; it is a vast infrastructure of data centers that consume enormous amounts of energy and water. The training and operation of large-scale AI models contribute significantly to global carbon emissions and e-waste, while the mining of rare earth minerals for their hardware often involves environmental degradation and exploitative labor practices.
Finally, the political implications are profound. The development and deployment of advanced AI are concentrated in the hands of a few powerful corporations. This centralization creates a new form of technological feudalism, where a small number of entities control the means of production, the flow of information, and the very infrastructure of our digital lives. The critical question becomes: For whose benefit is this convenience truly being created, and at what cost to our autonomy, our society, and our planet?
Conclusion – The Future We’re Building
I often think back to my friend, who continues to diligently train the algorithm that – if they succeed – will one day make their job redundant. Their unease is something I can relate to: a reflection of the deep and growing societal concern about the future we are collectively creating. Within that unease lies an epiphany – the uncomfortable realisation that our relentless pursuit of technological progress may be leading us towards a more efficient, yet less meaningful, world.
Perhaps AI will not replace us, but its current implementation reveals what we have already replaced: meaning with efficiency, purpose with productivity. In short, the real question of AI implementation is not one of job numbers but the existential one: what defines us as humans when we are no longer needed?
The challenge ahead is not to halt technological progress, but to steer it with wisdom and intention. It is time to move beyond the narrow question of job numbers and ask what kind of work gives us dignity, what kind of society fosters human flourishing, and what kind of future we truly want to build.
It’s time to rethink the future we’re building.
References
1. Georgieva, K. (2024). AI will transform the global economy. Let’s make sure it benefits humanity. International Monetary Fund. https://www.imf.org/en/Blogs/Articles/2024/01/14/ai-will-transform-the-global-economy-lets-make-sure-it-benefits-humanity
2. World Economic Forum. (2025, January 7). The Future of Jobs Report 2025. https://www.weforum.org/publications/the-future-of-jobs-report-2025/
3. Gimbel, M., Kinder, M., Kendall, J., & Lee, M. (2025). Evaluating the Impact of AI on the Labor Market: Current State of Affairs. The Budget Lab at Yale. https://budgetlab.yale.edu/research/evaluating-impact-ai-labor-market-current-state-affairs
4. OECD. (2025). Die Auswirkungen von KI auf die Arbeitsmärkte: Was wir bislang wissen [The impact of AI on labour markets: What we know so far]. OECD. https://www.oecd.org/de/publications/2021/01/the-impact-of-artificial-intelligence-on-the-labour-market_a4b9cac2.html
5. Kinder, M. (2025, October). New data show no AI jobs apocalypse—for now. Brookings. https://www.brookings.edu/articles/new-data-show-no-ai-jobs-apocalypse-for-now/
6. Brynjolfsson, E., Chandar, B., Chen, R., Bloom, N., Gans, J., Autor, D., Rock, D., Li, F.-F., Li, F., Langer, C., Bana, S., Cook, C., Forman, C., Wang, A., Ross, B., Maghzian, O., Halperin, B., Pei, J., Trammell, P., & Bergman, E. (2025). Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence. https://digitaleconomy.stanford.edu/wp-content/uploads/2025/08/Canaries_BrynjolfssonChandarChen.pdf
7. Ermut, S. (2025). Top 15 Predictions from Experts on AI Job Loss in 2025. AIMultiple. https://research.aimultiple.com/ai-job-loss/
8. Mbekeani, M. (2025, June 29). When Labor Meets AI: The Next Frontier in Workforce Economics. Forbes. https://www.forbes.com/sites/michellembekeani/2025/06/29/when-labor-meets-ai-the-next-frontier-in-workforce-economics/
9. Castrillon, C. (2025, October 2). AI “Workslop” Could Be the Biggest Threat to Productivity. Forbes. https://www.forbes.com/sites/carolinecastrillon/2025/10/02/ai-workslop-could-be-the-biggest-threat-to-productivity/
10. Bender, E. M., & Hanna, A. (2023, August 12). AI Causes Real Harm. Let’s Focus on That over the End-of-Humanity Hype. Scientific American. https://www.scientificamerican.com/article/we-need-to-focus-on-ais-real-harms-not-imaginary-existential-risks/
11. Aristotle. (1999). Politics. https://historyofeconomicthought.mcmaster.ca/aristotle/Politics.pdf
12. Marx, K. (2017). Ökonomisch-philosophische Manuskripte aus dem Jahre 1844 [Economic and Philosophic Manuscripts of 1844]. BoD – Books on Demand.
13. Pollak, M.-C. (2004). Hannah Arendt: Vita Activa oder vom tätigen Leben [Hannah Arendt: The Human Condition]. GRIN Verlag.
14. Bergmann, F. (2019). New Work, New Culture: Work We Want and a Culture That Strengthens Us. Zero Books.