While public debate fixates on deepfakes and election interference, a more pervasive force is emerging: influence exerted not through propaganda, but through conversation. As millions engage with Large Language Models every day, asking for advice, explanations, and emotional reassurance, AI is no longer just informing; it is influencing.
From coding to content creation, automation is reshaping not just how we work — but why we work. The real danger of AI may not be unemployment, but the slow unmaking of what it means to be human. Through the story of a developer training the AI that will one day replace him, this article explores the human cost of automation. Drawing on thinkers from Aristotle to Arendt, it examines how technology erodes meaning, identity, and connection, and asks what happens when humans no longer stand at the centre of their own creation.
In Nepal, the line between connection and control blurred when the government banned 26 social media platforms. What followed was a youth-led uprising that revealed deep fractures in its fragile democracy. This article reflects on how digital tools can empower or silence, and why Nepal’s struggle offers lessons for a world wrestling with the politics of technology.
A new report published by Europol highlights the growing threat posed by the use of AI in organized crime. Deepfake extortion, politically motivated cyberattacks, and targeted data theft: cybercrime is evolving faster than Artificial Intelligence regulation. This article explores how organized crime is deploying generative AI to expand and professionalize illicit activities worldwide, and the massive human and economic costs that follow.
From a silenced keynote at the UN’s AI for Good Summit to $47 billion in defense contracts, Big Tech’s entanglement with AI warfare reveals a troubling reality: the same firms promoting “AI for Good” are profiting from its use in war. Behind the glossy rhetoric of ethics and innovation lies a structural paradox: when profit depends on military AI, peace itself becomes a threat to business. This article explores how tech giants have entered the military-industrial complex, why their profit motives create a “peace trap” where conflict fuels innovation, and what it would take to realign AI with the pursuit of peace.
From Musk’s vulgar dismissal of the EU Commissioner to Europe’s reliance on Google Cloud and Chinese 5G, the story of European Digital Dependency is one of sovereignty without power. This article explores how regulations like the DSA clash with harsh realities: Europe depends on the very technologies it seeks to govern. From military contracts to social media platforms shaping elections, it traces how foreign tech has become embedded in Europe’s democratic core, and what pathways exist to reclaim digital resilience.
From Zoom diplomacy during COVID-19 to ongoing negotiations in fragile states, digital tools have changed how peace is built. Yet no screen can replace the rituals of presence — the handshakes, shared spaces, and subtle trust that sustain peace. This article explores hybrid peacebuilding as a way to merge digital inclusion with the irreplaceable power of face-to-face interaction, balancing opportunity with risk in the digital age.
Pegasus spyware has turned smartphones across Europe into weapons of surveillance, piercing the heart of privacy and trust. What once seemed like the tool of authoritarian regimes has quietly entered Europe’s democratic core. From Poland’s elections to Spain’s independence movements and Germany’s secret purchase, spyware is no longer just a foreign policy concern but a domestic democratic crisis. This article explores how Pegasus spyware reshapes European democracy, undermining press freedom, competitive elections, and the citizen–state relationship itself.
AI is becoming an unseen editor of reality itself. With US oversight dismantled and “anti-woke” neutrality redefined, political agendas can now guide what AI says, what it omits, and how billions worldwide come to understand the world.
In today’s landscape of information warfare, where war influencing has become a prevalent force that uses digital communication to divide, manipulate, and destabilise, peace influencing could serve as a vital counterstrategy. This article explores digital peace through the lens of peace journalism and media theory, tracing how narratives shape conflict and cohesion alike. It examines the historical role of media in war, the transformation brought by social platforms, and how strategic communication is now being reclaimed to foster dialogue, empathy, and democratic resilience. From global grassroots efforts to evolving norms in digital communication, it argues that influencing for peace is no longer idealistic, but necessary.

Food For Thought

Every system reaches a point where its makers lose the ability to contain it. Oppenheimer faced it in 1945. Artificial intelligence is moving toward the same line, where control slips and responsibility spreads to those forced to live with the consequences.
In 1943, a German pilot spared a shattered Allied bomber, choosing mercy where orders demanded execution. AI in warfare would not have paused. It would have scanned, confirmed, and fired, not from hatred but from code. Humans still draw fragile lines in war: a flag, a hand, a refusal. Machines do not see lines, only patterns, and once flagged as enemy, context collapses.