The Inevitability Myth: How AI Narratives Erode Democratic Choice

From 'inevitable AI' to doomsday warnings, the tech industry tells powerful stories about our digital destiny. This piece reveals how these narratives influence politics, profits, and who gets to shape our collective future.

The AI industry has successfully sold one of the biggest lies of the decade: that artificial intelligence is fate. But what is presented to us as a technological force of nature is, in fact, a masterclass in strategic communication.

We debate how to use AI, when it will arrive, how fast to adapt – but almost never do we ask whether we want it at all. This is particularly noteworthy given the frequency with which the same industry issues warnings that AI could pose an existential threat to humanity. If a technology is repeatedly described by its creators as potentially catastrophic, shouldn’t we question whether it should be developed at all?

Instead of opening up space for political debate about whether and how this technology should be developed, the tension between fear and inevitability leaves us stuck in reaction mode, feeling worried and fascinated, and unsure where democratic agency might begin.

The story of inevitability is just one part of a much bigger web of narratives that shape how we think and feel about AI. We are constantly exposed to stories about existential danger, technological salvation and superhuman intelligence.

Beyond delivering new technologies, Silicon Valley has also been remarkably successful in supplying the lens through which we view and understand them. It sets the frame within which AI is discussed, regulated, and ultimately accepted. Such frames determine whether automated welfare systems are viewed as neutral efficiency tools or as political decisions about who deserves support, and whether mass layoffs are perceived as inevitable progress or as choices made by those who benefit from them.

In this sense, AI narratives do more than shape opinion; they shape governance. They quietly shift authority away from democratic debate towards corporations and technical elites, determining who gets to decide how these systems are built and deployed.

This article examines four of the most influential stories in today’s AI discourse – the branding of “artificial intelligence”, the myth of inevitability, the doomsday scenario and AI-solutionism – to show how they operate, whose interests they serve and how exposing them can reopen the space for political choice.

“Artificial Intelligence”: How a Marketing Term Shaped Reality

The story of AI narratives begins with the term “artificial intelligence” itself. Although it sounds technical and neutral, the phrase was chosen for strategic reasons that had little to do with scientific precision.

In 1956, mathematician John McCarthy convened the Dartmouth workshop that formally launched the field. But as Forbes notes, there was “no agreement on a general theory of the field” – only a shared vision that computers could perform intelligent tasks. What “intelligence” actually meant remained undefined, leaving participants to pursue divergent approaches without common standards.1

McCarthy later admitted the term was selected for two pragmatic reasons: to distance his work from the “esoteric and narrow” field of automata theory, and “to escape association with ‘cybernetics,'” which he considered misguided. In other words, the term was not a description of a coherent scientific field, but a strategic positioning move designed to differentiate one group of researchers from others and attract funding.2

Today, what is sold as “AI” encompasses a wide range of tools. Some automate consequential decisions, such as screening job applications, approving loans or identifying individuals for welfare or policing purposes (automated decision systems). Others classify faces, voices and images for surveillance and advertising purposes (classification systems). There are also recommender systems that determine the content we see on platforms such as TikTok, Netflix, and YouTube. And then there is Generative AI: text- and image-generating systems such as ChatGPT, Gemini and Claude, the kind of “AI” that most people associate with the term today.3

As linguist Emily M. Bender and sociologist Alex Hanna demonstrate in their book The AI Con, the term “AI” is not a coherent technical category, but a marketing term that benefits those building technology by suggesting intelligence, agency or human-like understanding.

These systems don’t think or understand; they predict the most likely next words or pixels. However, because they produce fluent and confident-sounding responses, they create a powerful illusion of intelligence. This is a classic example of the ELIZA effect, the tendency to attribute understanding and intent to systems that simply mimic human conversation.
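
To make this concrete, here is a minimal, purely illustrative sketch of what “predicting the most likely next word” means. The vocabulary, probabilities and stopping rule below are invented for the example and bear no relation to any real model; the point is only that fluent-looking text can emerge from repeated probability lookups, with no understanding anywhere in the loop.

```python
import random

# Toy "language model": for each context word, an invented distribution over
# possible next words. A real model learns billions of parameters from text,
# but the basic operation is the same: score candidate continuations and emit
# a likely one.
NEXT_WORD_PROBS = {
    "the":      {"system": 0.4, "future": 0.35, "answer": 0.25},
    "system":   {"predicts": 0.6, "outputs": 0.3, "understands": 0.1},
    "predicts": {"the": 0.7, "nothing.": 0.3},
    "future":   {"is": 0.8, "was": 0.2},
    "is":       {"inevitable.": 0.5, "uncertain.": 0.3, "open.": 0.2},
}

def generate(start: str, max_words: int = 6) -> str:
    """Repeatedly sample a plausible next word - no meaning, just probabilities."""
    words = [start]
    for _ in range(max_words):
        dist = NEXT_WORD_PROBS.get(words[-1])
        if dist is None:  # no known continuation: stop
            break
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights, k=1)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the future is inevitable." - fluent, but not understood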

Grouping such disparate systems under the authoritative label of “intelligence” obscures how fundamentally they differ – and thus which decisions we are willing to delegate to them.

As Bender and Hanna argue, when everything from photo filters to chatbots to systems that decide who receives social benefits is described as “AI,” automation itself begins to feel natural and inevitable. A mistake in a credit-scoring system is fundamentally different from a poor film recommendation, yet both are dismissed as “AI errors”. This technical framing allows companies to hide behind the idea that “the AI decided”, while obscuring the fact that systems such as welfare algorithms rest on political choices about who is included and who is excluded.

This lack of clarity makes meaningful regulation substantially harder for lawmakers and the public. At the same time, the “one-size-fits-all” label serves a powerful economic function. If everything is labelled as “AI”, which is perceived as complex and expensive, then only major players such as Google, Microsoft and Meta are perceived as capable of developing it. In reality, however, many of these tools could be developed by smaller, specialised companies. As Bender and Hanna argue, this is essentially the “AI con”: the use of conceptual vagueness to mask power, profit and political control.

“Inevitability”: A Prediction Becomes a Self-fulfilling Prophecy

A second core narrative in the AI industry is that of “inevitability”. It is reflected in phrases such as “AI is here to stay”, “you can’t put the genie back in the bottle”, or, more implicitly, in the famous “adapt or get left behind”. Whatever one’s thoughts on artificial intelligence, the message is always the same: its spread is natural, unstoppable and ultimately beyond political control. The only rational response, we are told, is to keep up.

This narrative has deep roots. From the aforementioned Dartmouth conference onward, pioneers such as John McCarthy and Marvin Minsky saw computing as moving along a natural path toward human-level, and eventually superhuman, intelligence.4 In this view, faster hardware and better algorithms would inevitably create machine intelligence.

This idea was later taken up and popularised by futurists such as Ray Kurzweil, whose books – especially The Singularity Is Near – presented super-intelligent machines not as a possibility but as a historical destiny driven by exponential growth.5

Yet this narrative is not grounded in settled scientific proof, but in long-standing assumptions and projections about what increasingly powerful computation might eventually achieve. As Erik J. Larson and others have argued, this does not mean that artificial general intelligence is impossible, but it does mean that we have no good grounds to assume it must arrive. The leap from “computers are improving” to “machines will inevitably become minds” is not a technical conclusion; it is a philosophical and cultural assumption.6

Recent research by Fisher and Severini reinforces this point. Claims about the inevitability of AI are rarely derived from the actual capabilities of current systems. Instead, they rest on deeper, largely unexamined beliefs about what intelligence is and how technological change unfolds over time – beliefs that remain deeply contested even among experts. In this sense, calling AI “inevitable” is not a neutral forecast but a particular interpretation of history and progress presented as fact.7

This legacy lives on in contemporary industry communication, where technological determinism is typically coupled with utopian promises of abundance: AI will transform the economy, solve scarcity and unlock unprecedented prosperity. Framed this way, AI becomes less a set of human-made systems and more a force of nature. The political question of where, how and whether these technologies should be deployed is quietly replaced by a single demand: adapt!

Framed as inevitable, AI appears to advance on its own. In reality, however, it is not abstract technological forces that drive the development of AI, but rather specific companies with vast financial resources, significant political influence and specific strategic objectives. The claim is also performative: when governments, investors and corporations treat AI as inevitable, they make decisions that effectively make it so. Billions in funding, regulatory shortcuts and institutional adoption can transform a prediction into a self-fulfilling prophecy.

Meanwhile, the rhetoric of “adapt or be left behind” acts as a powerful disciplinary tool. Workers are told to upskill for an AI-driven future or risk losing their jobs; countries are warned that they must invest or lose global relevance; and companies are pressured to adopt AI or risk being overtaken by competitors. Responsibility is shifted from systems to individuals, while the underlying power structures that enable this shift disappear from public view. The real force of the inevitability narrative lies in how it shrinks the space for democratic choice: The question of who benefits, who bears the costs and who gets to decide is pushed into the background by a narrative that presents technological futures as destiny rather than human design.

The “Doomsday Narrative”: When Fear Replaces Democracy

Shortly after the release of ChatGPT, a metric began to circulate in AI circles: p(doom) – the estimated probability that artificial intelligence would destroy or fundamentally displace humanity.8 What was once a niche concern confined to science fiction forums suddenly became a topic of serious discussion among researchers, investors and tech executives.

The warnings were by no means marginal. In 2015, Sam Altman, the CEO of OpenAI, said that AI could “most likely lead to the end of the world”. Elon Musk, the CEO of xAI, has repeatedly compared AI to nuclear weapons. Dario Amodei, the CEO of Anthropic, has publicly suggested that there is a significant chance that things could go “really, really badly”.

These concerns have been amplified through open letters, conferences and a growing ecosystem of well-funded “AI safety” organisations focused on the catastrophic and existential risks posed by future superintelligent systems.9 The most prominent example is the “Statement on AI Risk”, issued by the Center for AI Safety in May 2023 and signed by over 350 leading figures. The statement declared that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.10

But what was even more remarkable than the conversation itself was who was driving it. It was not the critics of the technology who were issuing the loudest warnings, but the very people building it.

At first glance, this may sound like responsible foresight. Yet the doomsday narrative also performs a powerful political function. When AI is framed as an existential threat to civilisation, governance shifts from democracy to emergency management. And, by that very logic, authority concentrates in the hands of a small group of highly specialised experts and corporations who present themselves as uniquely capable of handling such danger. AI is thereby moved into the realm of security politics, where secrecy, urgency, and exceptional measures replace ordinary democratic oversight. The result is a paradoxical situation in which the same companies that create the risk present themselves as the only ones who can contain it.

This framing is tied to a broader ideological movement: Effective Altruism and its offshoot, Longtermism. These philosophies contend that the moral imperative of our time is to ensure the survival of future generations. From this perspective, even significant current harms, such as labour disruption, environmental damage and social inequality, matter far less than the risk posed by a hypothetical future superintelligence. This worldview is far from marginal; it is deeply embedded in the AI safety ecosystem, with many leaders of major AI labs openly identifying with or funding it.11

The p(doom) lens functions less as a neutral forecast and more as a moral triage: high existential risk estimates justify the deprioritisation of tangible harm, such as job loss or inequality, in favour of hypothetical extinction, thereby effectively ranking abstract future lives as more important than present suffering.

The practical result is a dramatic shift in political attention. Attention and resources flow toward hypothetical future catastrophes, while immediate, measurable harms such as emotional dependency, the transformation of the labour market and environmental costs are dismissed as temporary side effects. 

Finally, this doomsday debate is not socially neutral. It is dominated by a small, highly homogeneous group: predominantly white, male and extremely wealthy tech elites from Silicon Valley, whose fears of losing control over their own creations are treated as universal. Other existential threats that shape most people’s lives, from climate breakdown to economic insecurity and political violence, fade into the background. Entirely different ways of understanding risk, from the Global South, from labour movements, from feminist or post-colonial traditions, are rendered invisible.

“AI-Solutionism”: How Engineers Became Social Experts

In the AI-solutionist narrative, AI is presented as a universal problem-solver, capable of tackling everything from climate change and disease to education, bureaucracy, and even democracy itself.

The roots of AI-solutionism lie in the mid-20th-century concept of the “technological fix”, coined by Alvin Weinberg, which promised to solve social problems through engineering rather than politics.12 Born out of Cold War–era systems thinking, the idea was adopted by institutions like the RAND Corporation, which modelled societies, economies, and conflicts as problems of calculation and control.13 In the digital age, this logic reappeared as what Evgeny Morozov calls technological solutionism: the attempt to replace democratic judgement with data, metrics, and algorithmic optimisation.14

This same logic now manifests in contemporary AI narratives. In a recent New York Times article, Morozov himself traces how advocates of artificial general intelligence advance sweeping promises to “boost scientific knowledge,” “turbocharge the economy,” and “elevate humanity by increasing abundance” – to the point that “not using it to save the world seems immoral.”15

And once politics is successfully rebranded as an optimisation problem, deploying technical solutions is no longer just an option but a moral obligation. This framing quietly turns political and social conflicts into engineering problems. If poverty, inequality, or climate change can be “solved” by better algorithms, then questions about power, redistribution, and collective choice slip out of view. Complex and contested social realities are reimagined as data flows waiting to be optimised.

For technology companies, this narrative is extraordinarily useful. It portrays them not only as product suppliers, but also as crucial partners in governing the future. What this narrative leaves out is who is actually doing the “governing”: software engineers, data scientists, product managers and corporate executives – people trained to optimise systems and scale products, not to resolve social conflicts, weigh ethical trade-offs or make democratic decisions.

Yet, under the banner of AI-driven solutions, their technical judgements are increasingly treated as legitimate forms of social expertise. This not only sidelines democratic debate, but also flattens ethical complexity by privileging what can be measured and optimised over diverse values, contested meanings, and participatory forms of judgement.16

But when social problems are framed as optimisation tasks, fundamental conflicts of interest disappear. Poverty is no longer seen as the result of power relations and distributional issues, but as an “efficiency problem”. Climate change becomes a technical challenge rather than a question of consumption, growth and global justice.

Techno-solutionism is also always a project of datafication. In order to be “optimised”, problems must first be made measurable. This results in a significant increase in monitoring and control, presented as benevolent assistance. Poverty reduction becomes biometric databases. Education becomes continuous learning surveillance. Healthcare becomes genetic and behavioural tracking.17

This narrative also fails to mention how AI is actually being developed and deployed today. Despite the rhetoric of “AI for good”, the systems that are being scaled up and monetised overwhelmingly serve advertising, data extraction, automation and generative content, applications that concentrate power and profit rather than addressing structural social needs. Although promising uses in healthcare, climate research or public services do exist, they are not currently driving investment, infrastructure or corporate strategy.

Ultimately, when “AI” is presented as the solution, other ways of addressing social problems, such as community-based approaches, cooperative organisations and unions, grassroots movements and redistribution, are pushed out of focus. These alternatives slip out of the realm of the possible. The only debates left are about which AI system to deploy, never whether there might be fundamentally different paths forward.

Reclaiming Democratic Agency

The greatest threat posed by artificial intelligence is not its potential power, but the manufactured belief that we are powerless to stop it. The moment we accept it as destiny is the moment we surrender our democratic right to imagine – and build – a different future.

All four narratives analysed in this article form a closed, mutually reinforcing system. The vague term “AI” makes it difficult to regulate or resist specific applications. The rhetoric of inevitability forecloses democratic debate before it begins. The doomsday scenario concentrates authority in the hands of those building the technology. And techno-solutionism reframes every remaining political question as a technical problem that only they can solve. Together, they create a discursive trap: they make us feel like spectators of a future that has already been decided for us. Fear and awe replace deliberation; adaptation replaces consent.

However, “artificial intelligence” is not a force of nature. It is a set of systems created by humans, developed by organisations, funded by investors, influenced by legislation, and used within political economies. This means it can be questioned, redirected, limited or rejected.

Democratic societies are under no obligation to accept all forms of automation, data practices or algorithmic decision-making. We can decide where automation is appropriate and where it is taking jobs and opportunities away from already marginalised groups. We can choose which areas of society should remain human-governed and which technologies are compatible with dignity, solidarity and justice.

The story of “AI” does not have to be a prophecy written by a few; it can be a story of democratic agency and peaceful digital futures — if we choose to write it.

References

  1. Press, G. (2016, August 28). Artificial Intelligence Defined As A New Research Discipline: This Week In Tech History. Forbes. https://www.forbes.com/sites/gilpress/2016/08/28/artificial-intelligence-defined-as-a-new-research-discipline-this-week-in-tech-history/
  2. Nilsson, N. (2009). The Quest for Artificial Intelligence: A History of Ideas and Achievements. https://ai.stanford.edu/~nilsson/QAI/qai.pdf
  3. Bender, E. M., & Hanna, A. (2025). The AI Con. HarperCollins.
  4. Wikipedia Contributors. (2024, October 27). Existential Risk From Artificial Intelligence. Wikipedia; Wikimedia Foundation. https://en.wikipedia.org/wiki/Existential_risk_from_artificial_intelligence
  5. Dembski, W. A. (2021, April 19). Artificial Intelligence: Unseating the Inevitability Narrative. Science and Culture Today. https://scienceandculture.com/2021/04/artificial-intelligence-unseating-the-inevitability-narrative/
  6. Larson, E. J. (2021). The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do. The Belknap Press of Harvard University Press.
  7. Fisher, M., & Severini, J. (2025). Making AI Inevitable: Historical Perspective and the Problems of Predicting Long-Term Technological Change. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.5404686
  8. Klein, E. (2025, October 15). Opinion | How Afraid of the A.I. Apocalypse Should We Be? The New York Times. https://www.nytimes.com/2025/10/15/opinion/ezra-klein-podcast-eliezer-yudkowsky.html
  9. Vincent, J. (2023, May 30). Top AI researchers and CEOs warn against “risk of extinction” in 22-word statement. The Verge. https://www.theverge.com/2023/5/30/23742005/ai-risk-warning-22-word-statement-google-deepmind-openai
  10. Center for AI Safety. (2025). Statement on AI Risk | CAIS. Center for AI Safety. https://aistatement.com/
  11. Gebru, T. (2022, November 30). Effective Altruism Is Pushing a Dangerous Brand of “AI Safety.” Wired. https://www.wired.com/story/effective-altruism-artificial-intelligence-sam-bankman-fried/
  12. Sætra, H. S., & Selinger, E. (2024). Technological Remedies for Social Problems: Defining and Demarcating Techno-Fixes and Techno-Solutionism. Science and Engineering Ethics, 30(6). https://doi.org/10.1007/s11948-024-00524-x
  13. Engerman, D. C. (2010). Social Science in the Cold War. Isis, 101(2), 393–400. https://doi.org/10.1086/653106
  14. Morozov, E. (2013). To Save Everything, Click Here: Technology, Solutionism, and the Urge to Fix Problems that Don’t Exist. London: Penguin Books.
  15. Morozov, E. (2023, June 30). The True Threat of Artificial Intelligence. The New York Times. https://www.nytimes.com/2023/06/30/opinion/artificial-intelligence-danger.html
  16. Nutas, A. (2024). AI solutionism as a barrier to sustainability transformations in research and innovation. GAIA – Ecological Perspectives for Science and Society, 33(4), 373–380. https://doi.org/10.14512/gaia.33.4.8
  17. Mejias, U. A., & Couldry, N. (2019). Datafication. Internet Policy Review, 8(4). https://doi.org/10.14763/2019.4.1428

Alissa Chmiel

Alissa Chmiel is the founder of Digital Peace and a PhD candidate researching cognitive resilience in the digital age. In her writing for Digital Peace, she explores the complex intersections of technology, society, democracy, and peace, through a gender-aware and power-sensitive lens. Her work combines critical reflection with a deep curiosity about what it means to remain human in an increasingly digital world.
