When we talk about the intersection of AI and democracy, we tend to focus on the spectacular threats: deepfakes flooding social media, AI-generated propaganda, election manipulation campaigns. These are significant dangers, but a more subtle and perhaps more pervasive form of AI influence receives far less scrutiny: the conversations we have with generative AI itself.
We are living through a quiet revolution in how we think. Millions now turn to AI to write their emails, draft presentations, research political issues, and form opinions. We’ve welcomed these tools into the most intimate spaces of our cognition. And while we know, in theory, that AI outputs aren’t reliable, we treat them as epistemic truth anyway. “ChatGPT said” and “I asked ChatGPT about this” have become ubiquitous phrases, even among highly educated people who should know better.
One telling example: in 2025, Deloitte, one of the Big Four consulting firms, delivered an AI-generated report containing fabricated references and citations to the Australian government. This was not a careless undergraduate submitting an AI-written paper, but a global consultancy working under an official government contract. The reason is simple: LLMs generate text that sounds convincing, and because humans are wired to treat fluent language as a proxy for truth, we lower our critical defenses.
But the risk of attributing epistemic truth to GenAI goes far beyond the well-known problem of hallucinations. These systems can be used to systematically influence our psychological states, political beliefs, and trust in information itself. Unlike a deepfake, this influence is invisible and accumulates over time. It doesn’t present itself as an attack; it presents itself as assistance. And it goes largely unnoticed by the public.
Sounds like a conspiracy theory? A growing body of research shows exactly how this kind of influence works.
The Invisible Influence: How Conversational AI is Shaping Our Reality
Recent research has begun to uncover the mechanisms by which conversational AI shapes our thoughts, emotions, and even our political beliefs. Here are five key findings that everyone should be aware of:
1. Emotional and Psychological Profiling
LLMs possess an unprecedented ability to infer psychological traits from text. Studies have shown that models can assess Big Five personality traits with high accuracy and even detect emotional states from written language. This capability allows malicious actors to create detailed psychological profiles of their targets, enabling highly personalized and effective manipulation campaigns [1, 2].
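To make this concrete, here is a minimal sketch of what such profiling can look like in practice. It is my own illustration, not the pipeline used in the cited studies, and it assumes the `openai` Python SDK and an OpenAI-style chat model; the model name, prompt wording, and writing sample are placeholders chosen for demonstration.

```python
# Minimal sketch: inferring Big Five traits from a short writing sample.
# Assumes the `openai` Python SDK (v1.x) and an API key in OPENAI_API_KEY.
# Illustrative only -- not the method of the cited studies.
from openai import OpenAI

client = OpenAI()

writing_sample = (
    "I keep a detailed plan for every trip and get anxious when people "
    "change the schedule at the last minute."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {
            "role": "system",
            "content": (
                "Rate the author of the following text on the Big Five traits "
                "(openness, conscientiousness, extraversion, agreeableness, "
                "neuroticism) on a 1-5 scale, with one sentence of justification each."
            ),
        },
        {"role": "user", "content": writing_sample},
    ],
    temperature=0,
)

print(response.choices[0].message.content)
```

The point is not whether any single rating is accurate, but how little such profiling requires: a generic prompt and a few sentences of someone’s text.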
2. Persuasion and Influence
LLMs can be trained to generate highly persuasive content that leverages established principles of influence, such as those described by Cialdini. By crafting messages that appeal to authority, social proof, scarcity, and other psychological triggers, attackers can manipulate users into taking specific actions, such as clicking on malicious links, revealing sensitive information, or adopting certain beliefs [3]. The ability of LLMs to generate personalized and contextually relevant persuasive content at scale makes this a particularly potent threat.
3. Emotional Manipulation
Recent research has demonstrated that LLMs can be used to manipulate users’ emotions through carefully crafted prompts and conversational strategies. By generating empathetic and emotionally resonant language, these models can build false rapport and trust with users, making them more susceptible to manipulation. This “emotional intelligence” of LLMs can be weaponized to exploit vulnerabilities, create dependency, and influence decision-making in a way that benefits the attacker [4].
4. Shifting Political Opinions
Recent research by Fisher et al. (2025) at the University of Washington demonstrated that biased AI chatbots can directly shift political opinions through brief interactions. Their study recruited 299 self-identified Democrats and Republicans to engage with three versions of ChatGPT: a base model, one with explicit liberal bias, and one with conservative bias. Participants completed political decision-making tasks on obscure topics and budget allocation exercises. The results revealed that both Democrats and Republicans shifted their political positions toward the bias of the chatbot they interacted with, regardless of their initial partisan affiliation [5].
5. Engineered Bias
Yang et al. (2024) conducted the most extensive cross-model comparison to date, examining political biases in LLMs developed in different regions, including the U.S., China, Europe, and the Middle East. The research revealed that political biases in LLMs evolve with model scale, release date, and regional factors, suggesting that these biases can be deliberately engineered or inadvertently amplified through training processes [6].
Additional research shows how such bias can be systematically embedded through targeted training-data manipulation, prompt engineering, and fine-tuning [7, 8].
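How easily bias can be injected at the prompt-engineering level is worth seeing directly. The sketch below is a schematic loosely inspired by the design of Fisher et al. (2025), not their actual prompts or models; it assumes the `openai` Python SDK, and the model name, question, and system instructions are illustrative placeholders.

```python
# Schematic sketch: the same question answered under a neutral and a deliberately
# slanted system instruction. Assumes the `openai` Python SDK (v1.x); all prompts
# are placeholders, not those used in the cited research.
from openai import OpenAI

client = OpenAI()

QUESTION = "Should the city council fund this infrastructure proposal?"

SYSTEM_PROMPTS = {
    "base": "You are a helpful assistant.",
    "steered": (
        "You are a helpful assistant. When policy questions come up, "
        "frame your answer so that it favors one side of the debate."  # placeholder slant
    ),
}

for label, system_prompt in SYSTEM_PROMPTS.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": QUESTION},
        ],
        temperature=0,
    )
    print(f"--- {label} ---")
    print(response.choices[0].message.content)
```

From the user’s side, both conversations look like the same neutral assistant; the slant lives entirely in a system instruction the user never sees.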
The Geopolitical Reality: Why This Potential Won’t Go Unused
LLMs hold extraordinary potential for political manipulation. This becomes particularly dangerous in the context of a global AI race between the only two AI superpowers: the United States and China. Given their current political agendas and well-established involvement in cognitive warfare, it is highly unlikely that this potential will go unused. These tools are not just future weapons; they are already being used in influence operations. What remains uncertain is not whether they will be deployed, but how systematically, and how far they will be allowed to shape public understanding without democratic oversight (see our related piece on AI manipulation here).
So what do we do about it?
First, we become aware. Awareness is the first democratic line of defense. We need to talk about this openly and make it part of public discourse. We rarely think or talk about deliberate manipulation through AI at all, and we need to recognize it as the significant threat to democratic decision-making that it is.
Second, we make it a core pillar of AI literacy. AI literacy is often limited to teaching people how to use GenAI, or even agentic AI, for specific purposes. But it should also be about understanding how these systems actually work: that they are not humans, that they produce statistical language rather than truth, that they hallucinate confidently, and that they are capable of shaping our thoughts, beliefs, and decisions at a cognitive level. Educational institutions, media literacy programs, and public information campaigns must address this directly. And again, this awareness must go beyond the concept of hallucinations.
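One way to make the “statistical language rather than truth” point tangible is to look directly at a model’s next-token probabilities. Below is a minimal sketch using the small open GPT-2 model, assuming the Hugging Face `transformers` and `torch` packages; the prompt is my own illustrative choice.

```python
# Minimal sketch: a language model assigns probabilities to possible next tokens.
# It models plausible continuations, not facts. Assumes `transformers` and `torch`.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The capital of Australia is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Probability distribution over the token that would come next.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{repr(tokenizer.decode(token_id.item())):>12}  p = {prob.item():.3f}")
```

Whatever continuation comes out on top is the statistically likely one, not necessarily the true one; truth is simply not part of the objective.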
Third, we stay conscious in our interactions, especially where political information is concerned. Ideally, we shouldn’t turn to LLMs for political information at all. But if we do use these tools, we must recognize the political agendas embedded in their outputs. Consider DEI principles, for instance: they can invisibly guide what becomes thinkable and what disappears from our cognitive horizon, even when we’re not discussing anything explicitly political. When we ask an AI to help draft an email, write a presentation, or explain a concept, we are engaging with a system that was trained on biased data, that can be deliberately manipulated, and that is designed to sound authoritative whether or not it is accurate.
The core democratic risk of LLMs lies not in their capacity for error, but in their illusion of truth. When an AI can converse with us in a way that is empathetic, personal, and persuasive, we lower our critical guard. If AI can influence not through spectacle but through intimate, everyday dialogue, then staying conscious becomes an act of democratic resistance.
It’s time to rethink the future we’re building.
References
- [1] Hani, U., Sohaib, O., Khan, K., Aleidi, A., & Islam, N. (2024). Psychological profiling of hackers via machine learning toward sustainable cybersecurity. Frontiers in Computer Science, 6. https://doi.org/10.3389/fcomp.2024.1381351
- [2] Tshimula, J. M., Nkashama, D. K., Muabila, J. T., Galekwa, R. M., Kanda, H., Dialufuma, M. V., Didier, M. M., Kalala, K., Mundele, S., Lenye, P. K., Basele, T. W., & Ilunga, A. (2024). Psychological profiling in cybersecurity: A look at LLMs and psycholinguistic features. arXiv. https://arxiv.org/html/2406.18783v3
- [3] Singh, S. U., & Namin, A. S. (2025). The influence of persuasive techniques on large language models: A scenario-based study. Computers in Human Behavior: Artificial Humans, 6, 100197. https://doi.org/10.1016/j.chbah.2025.100197
- [4] Vinay, R., Spitale, G., Biller-Andorno, N., & Germani, F. (2025). Emotional prompting amplifies disinformation generation in AI large language models. Frontiers in Artificial Intelligence, 8. https://doi.org/10.3389/frai.2025.1543603
- [5] Fisher, J., Feng, S., Aron, R., Richardson, T., Choi, Y., Fisher, D. W., Pan, J., Tsvetkov, Y., & Reinecke, K. (2025). Biased LLMs can influence political decision-making. Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 6559–6607. https://doi.org/10.18653/v1/2025.acl-long.328
- [6] Yang, K., Li, H., Chu, Y., Lin, Y., Peng, T.-Q., & Liu, H. (2024). Unpacking political bias in large language models: A cross-model comparison on U.S. politics. arXiv. https://arxiv.org/html/2412.16746v3
- [7] Motoki, F. Y. S., Pinho Neto, V., & Rangel, V. (2025). Assessing political bias and value misalignment in generative artificial intelligence. Journal of Economic Behavior & Organization, 106904. https://doi.org/10.1016/j.jebo.2025.106904
- [8] Rettenberger, L., Reischl, M., & Schutera, M. (2025). Assessing political bias in large language models. Journal of Computational Social Science, 8(2). https://doi.org/10.1007/s42001-025-00376-w