Worried about how social media shapes public opinion? AI manipulation will make that look like the warm-up.
Trump’s AI deregulation and the launch of Truth Social’s new AI chatbot mark a globally significant turning point: AI becomes an invisible ideological force precisely by claiming to have transcended ideology altogether.
The struggle for a fair and truly neutral AI has effectively been abandoned. Instead, we are witnessing the rise of ideological AI, introduced under the banner of “freedom from ideology,” yet guided by political agendas. For many, the contradiction will be hard to grasp, because the influence will rarely appear as blatant statements. It will be embedded in the narratives AI creates and even more in what it leaves unsaid.
How Trump’s 2025 Order Redefines Neutrality and Dismantles Oversight
Two weeks ago, the White House issued the executive order “Preventing Woke AI in the Federal Government,” aiming to block the influence of diversity, equity, and inclusion (DEI) ideologies in AI, which it claims distort facts, manipulate representation, and undermine the reliability of AI outputs.
Framed as a move toward “ideological neutrality” and “truth-seeking,” the order dismantles much of the AI oversight framework established under the Biden administration. Federal agencies are now prohibited from using AI systems that incorporate DEI principles, and any consideration of concepts such as unconscious bias, systemic racism, or intersectionality is labeled as “ideologically biased.”1
The changes extend far beyond federal procurement rules. Companies contracting with the government are no longer required to disclose how their AI models work, what data they were trained on, or the risks they may pose – requirements once central to transparency. Safety and ethics reviews before deployment, previously a safeguard against harmful outputs, have been eliminated. While agencies can still request documentation to verify “ideological neutrality,” the definition is politically loaded: neutrality now means the absence of any programmed attention to diversity or equity, regardless of whether those factors are relevant to accuracy or fairness.
Officially, the order applies only to AI systems used or procured by federal agencies. The U.S. government emphasizes that it does not intend to impose requirements on how models function in the private sector. In practice, however, the impact is likely to extend far beyond that: major U.S. providers often adapt their models to qualify for government contracts, and these changes frequently make their way into the publicly available versions used around the world.
Critics warn that this redefinition aligns with the “anti-woke AI” agenda promoted by figures like Elon Musk, where the removal of so-called “ideological filters” often also removes guardrails designed to prevent violent, discriminatory, or extremist content.2 On paper, the policy claims to remove bias; in practice, it reshapes the very boundaries of what AI can say, privileging a narrow political vision under the guise of objectivity.
Curated Reality and Loosened Safeguards: “Neutral AI” or AI Manipulation?
But what does this redefined neutrality look like in action? We don’t have to guess. The President’s own platform offers an early preview. Just a few days ago, Truth Social launched Truth Search AI in beta, partnering with AI startup Perplexity to deliver AI-generated answers with cited sources. While Perplexity provides the technology, Truth Social retains full editorial control over which outlets the AI can access.
Independent testing by WIRED found the system relies heavily on conservative sources like Fox News and Breitbart, reinforcing selective sourcing that mirrors the platform’s political lean.3 The Verge reported that Truth Search AI has downplayed negative information about Donald Trump, even when credible sources exist, suggesting an editorial filter may shape results.4
While this is a private venture with no direct government involvement, the timing is notable: deregulation at the national level coincides with a private platform using AI to curate – and potentially filter – political information for its audience. This is one early example of AI manipulation in practice, where editorial control shapes not just what users read, but how they perceive political reality.
A similar pattern emerges with Elon Musk’s AI chatbot, Grok. Marketed as “neutral” after its guardrails were loosened, Grok stopped filtering out content labeled as progressive or “woke.” In practice, this shift often meant producing answers that aligned with Musk’s own views, sometimes echoing more conservative or “anti-woke” positions.5
Watchdog groups documented Grok validating extremist narratives, such as the false “white genocide” conspiracy in South Africa, and even generating antisemitic slurs and rape fantasies after developers instructed it to avoid “political correctness.”6 These examples illustrate a critical point: removing safeguards in the name of neutrality does not eliminate bias, but it can open the door to AI systems that disproportionately reproduce and legitimise extreme ideological positions. With federal oversight rolled back, such models could increasingly shape public discourse under the banner of being “truth-seeking” and “ideologically neutral.”
When American AI Shapes Global Minds
US-based AI companies dominate the global market. Their models power apps, search engines, and productivity tools used from Berlin to Nairobi to Jakarta. When US rules change, the ripple effects reach far beyond Silicon Valley.
It’s a mistake to think Trump’s AI agenda will only shape American life. China and the EU, especially Germany, are among the largest consumers of US-built or internationally trained AI systems. These tools are already embedded in:
- Election campaigns in Europe, where AI-generated briefings and microtargeted content are becoming the norm.
- Governance in Africa, where AI supports public administration and service delivery.
- News consumption in Asia, where platforms rely heavily on US-trained models for search and recommendations.
Even if other nations set their own AI standards, those rules matter little if the tools people use daily are built under a deregulated US framework. Political choices in Washington can rewrite the information environment for billions, quietly, invisibly, and without a vote.
From Information Age to Influence Age: When AI Rewrites How We Think
The debate over AI bias exposes a basic misunderstanding: when historical data has systematically excluded women scientists or minority perspectives, adding them is not ideological bias – it’s factual correction. The real fault line is not “left vs. right” but the question of who holds the power to shape how billions of people understand the world. Today’s US-dominated AI systems are deeply embedded in our daily lives, from how we search for information to how we form relationships. As these companies profit from psychological dependency while externalising social costs like addiction, polarisation, and democratic erosion, “editorial freedom” becomes a euphemism for the freedom to exploit. In this context, democratic oversight is not merely a domestic concern; it is a global imperative, essential to safeguarding human flourishing over narrow commercial interests.
The data we feed into today’s AI will shape tomorrow’s reality. Train it on a version of history where only men work at NASA, and society will erase the countless women who have been there all along. Feed it racially biased policing data, and automated systems will reproduce and even deepen injustice. Researchers agree: bias is one of AI’s most complex challenges. Now, with oversight dismantled, we risk unleashing models that embed today’s prejudices deep into the foundations of our future.
We’ve already seen how social media can amplify division, distort truth, sway elections, and reshape daily life. AI will push this much further. It will work in the shadows, deciding what we see and don’t see, how we understand the world — and what we’ll never consider.
And because AI’s influence is delivered through private, personalised interactions, its fingerprints are harder to spot. Unlike social media feeds, where trends and narratives play out in public, AI shapes opinions one conversation at a time, invisible to anyone else. We may not notice the shift until it is already embedded in our collective thinking. So if you were already worried about how social media influences public opinion, AI will make that look like a mere warm-up. This time, the shift will be silent, seamless, and happening privately within our personal AI conversations.
References
Thanks to Ron Lach @ Pexels for the header image!
- The White House. (2025, July 23). Preventing Woke AI in the Federal Government. The White House. https://www.whitehouse.gov/presidential-actions/2025/07/preventing-woke-ai-in-the-federal-government/
- Knight, W. (2024, October 30). Elon Musk’s Criticism of “Woke AI” Suggests ChatGPT Could Be a Trump Administration Target. WIRED. https://www.wired.com/llm-political-bias/
- Scrimgeour, G. (2025, August 8). Truth Social’s New AI Chatbot Is Donald Trump’s Media Diet Incarnate. WIRED. https://www.wired.com/story/i-fear-truth-search-ai-might-be-biased-but-it-says-it-isnt/
- Weatherbed, J. (2025, August 7). Truth Social’s new AI search engine basically just pushes Fox News. The Verge. https://www.theverge.com/news/753863/trump-truth-social-ai-search-perplexity-conservative-bias
- Gold, H. (2025, July 8). Elon Musk’s AI chatbot is suddenly posting antisemitic tropes. CNN. https://edition.cnn.com/2025/07/08/tech/grok-ai-antisemitism
- Köver, C. (2025, July 24). Trumps KI-Plan: Ideologisch neutral, nicht „woke“. Netzpolitik.org. https://netzpolitik.org/2025/trumps-ki-plan-ideologisch-neutral-nicht-woke/