From bias to deepfakes, AI harms women in structural ways that mirror existing inequalities. Technology built to enhance human potential continues to reinforce gendered power and exclusion.
When individuals used technology to share paywalled knowledge, it was condemned as piracy. When AI models scrape, summarize, and reproduce that same information without paying for it, it is celebrated as innovation. This double standard exposes a deeper contradiction in how we define ownership, creativity, and fairness online. Drawing on examples from Sci-Hub to ChatGPT, the article traces how AI has become the new middleman in the struggle between access and control, where the same act that once got people sued now fuels billion-dollar industries. It asks a pressing question: if technology can bypass the walls of information, who truly benefits, those seeking knowledge or those selling it?
The US has just announced its largest monetary seizure ever: more than $15 billion in bitcoin. The target is an investment conglomerate known as Prince Holding Group, responsible for running more than 10 scam compounds in Cambodia and for conducting international money-laundering operations. This article explores the phenomenon of scam cities, the role of cryptocurrencies in facilitating international money laundering, and how blockchain can be used as a tool for accountability.
While public debate fixates on deepfakes and election interference, a more pervasive force is emerging: not through propaganda, but through conversation. As millions engage with Large Language Models every day, asking for advice, explanations, and emotional reassurance, AI is no longer just informing; it is influencing.
From coding to content creation, automation is reshaping not just how we work but why we work. The real danger of AI may not be unemployment, but the slow unmaking of what it means to be human. Through the story of a developer training the AI that will one day replace him, this article explores the human cost of automation. Drawing on thinkers from Aristotle to Arendt, it examines how technology erodes meaning, identity, and connection, and asks what happens when humans no longer stand at the centre of their own creation.
In Nepal, the line between connection and control blurred when the government banned 26 social media platforms. What followed was a youth-led uprising that revealed deep fractures in its fragile democracy. This article reflects on how digital tools can empower or silence, and why Nepal’s struggle offers lessons for a world wrestling with the politics of technology.
A new Europol report highlights the growing threat posed by the use of AI in organized crime. Deepfake extortion, politically motivated cyberattacks, and targeted data theft: cybercrime is evolving faster than Artificial Intelligence regulation. This article explores how organized crime is using Generative Artificial Intelligence to expand and professionalize illicit activities worldwide, and the massive human and economic costs that come with it.
From a silenced keynote at the UN's AI for Good Summit to $47 billion in defense contracts, Big Tech's entanglement with AI warfare reveals a troubling reality: the same firms promoting "AI for Good" are profiting from its use in war. Behind the glossy rhetoric of ethics and innovation lies a structural paradox: when profit depends on military AI, peace itself becomes a threat to business. This article explores how tech giants have entered the military-industrial complex, why their profit motives create a "peace trap" where conflict fuels innovation, and what it would take to realign AI with the pursuit of peace.
From Musk's vulgar dismissal of the EU Commissioner to Europe's reliance on Google Cloud and Chinese 5G, the story of European digital dependency is one of sovereignty without power. This article explores how regulations like the DSA clash with a harsh reality: Europe depends on the very technologies it seeks to govern. From military contracts to social media platforms shaping elections, it traces how foreign tech has become embedded in Europe's democratic core, and what pathways exist to reclaim digital resilience.

Food For Thought

Every system reaches a point where its makers lose the ability to contain it. Oppenheimer faced it in 1945. Artificial intelligence is moving toward the same line, where control slips and responsibility spreads to those forced to live with the consequences.
In 1943, a German pilot spared a shattered Allied bomber, choosing mercy where orders demanded execution. AI in warfare would not have paused. It would have scanned, confirmed, and fired, not from hatred but from code. Humans still draw fragile lines in war: a flag, a hand, a refusal. Machines do not see lines, only patterns, and once flagged as enemy, context collapses.