#AIForGood — or just Good For the Economy?

From healthcare to peacebuilding, AI holds real promise for the public good. But in 2025, the data paints a different picture: while economic applications are rapidly scaled and funded, socially driven AI remains limited in scope, often sidelined or under-resourced. This article takes a closer look at where AI is actually being deployed — and what that reveals about our collective priorities, power structures, and the kind of future we’re building.


Artificial Intelligence has been celebrated as both an economic revolution and a powerful tool for addressing humanity’s greatest challenges.

This month at Digital Peace, we’ve taken a closer look at some of the less comfortable truths behind AI: We talked about sustainability in the age of automation. We explored what happens when emotional connection is simulated by machines. And we published our first guest article – a sharp analysis on AI Inequality by Ananthu Anilkumar.

Each of these raised the same underlying question: Are we truly developing AI to serve humanity – or simply to fuel the next cycle of economic expansion?

To move beyond assumptions, I turned to the data. Rather than relying on forecast-based projections, I focused exclusively on what is actually happening. If we want to understand AI’s true impact, we need to look beyond the promises — and into the patterns. In this regard, the analysis focuses on five domains where real-world data is available: productivity, medical applications, environmental impact, human rights, and peacebuilding — the very areas at the heart of the #AIForGood narrative, which frames AI as a solution to global challenges and a partner to the UN Sustainable Development Goals.

What emerges is a stark imbalance:

  • Economic applications of AI dominate both investment and implementation.
  • “AI for Good” initiatives remain limited in scope, slow to scale, and underfunded.
  • The environmental and human rights externalities of AI deployment raise serious questions about its net societal benefit.

This doesn’t mean AI is inherently profit-driven. But its current deployment patterns reflect a clear preference for commercial efficiency over public value, equity, and democratic oversight.

And while the reality of AI deployment is complex and nuanced — with some economic applications indirectly benefiting the public, and this analysis covering just five of many possible domains — the data we reviewed tells a clear story:

In 2025, AI is still primarily scaled for profit, not for people.

Two Distinct Logics

Before we can evaluate AI’s true societal impact, we must distinguish between two fundamentally different operational logics that drive technology deployment:

AI Serving the Economy optimises profit margins and shareholder value through private-sector implementations. Success is measured by cost reduction, revenue growth, and competitive advantage, and accountability runs primarily to investors and shareholders.

AI Serving the Public Good focuses on reducing working hours, improving quality of life, and redistributing benefits through public infrastructure and care sectors. Success is defined by social outcomes such as dignity, accessibility, and equity, and accountability runs to affected communities and populations.

Part of the challenge in distinguishing between AI for economic gain and AI for public good lies in how we measure impact. Economic benefits are relatively easy to quantify—cost savings, productivity boosts, revenue growth, market share. These metrics fit neatly into spreadsheets and quarterly reports. Social benefits, however, resist such easy quantification. How do we quantify restored dignity, prevented discrimination, or lives saved through early warning systems? This measurement asymmetry creates a systematic bias in how we evaluate and prioritize AI applications. Not because social impact is less important, but because it is less visible to the systems driving investment and scale.

Another crucial distinction lies in where AI systems are deployed. While AI serving economic interests is embedded in private-sector workflows (streamlining supply chains, automating tasks, boosting corporate efficiency), AI serving the public good needs to be implemented in public infrastructure, social services, and care sectors: spaces that are often underfunded and unsuited to profit-based metrics. This creates a structural challenge: the actors with the greatest resources to develop and deploy AI (private companies, wealthy nations) have different incentives than those who might benefit most from AI’s application to public challenges (under-resourced communities, developing countries, marginalised populations).

The Productivity Paradox – Modest Gains, Massive Investment

One of the most comprehensive sources on AI productivity is the February 2025 report by the Federal Reserve Bank of St. Louis. It found that workers using generative AI save, on average, 5.4% of their weekly work hours, roughly 2.2 hours. Across the entire workforce, including non-users, that drops to just 1.4% of total hours worked. In macro terms, this translates to a 1.1% increase in aggregate productivity. 1 Measurable and real, but far from transformative. For context: during the internet boom of the 1990s, annual productivity growth averaged 3%. 2
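To make the arithmetic behind these percentages concrete, here is a minimal back-of-envelope sketch. The 40-hour week and the share of hours worked by AI users are assumptions for illustration, not figures taken from the report; they simply show how a 5.4% saving among users can shrink to roughly 1.4% across the whole workforce.

```python
# Back-of-envelope reconstruction of the St. Louis Fed figures.
# ASSUMPTIONS (not stated in the report as cited here): a 40-hour work week and
# that roughly a quarter of all hours are worked by generative-AI users.

WEEKLY_HOURS = 40          # assumed standard work week (hypothetical)
SAVINGS_FOR_USERS = 0.054  # 5.4% of weekly hours saved by AI users (report figure)
USER_HOUR_SHARE = 0.26     # assumed share of total hours worked by AI users (hypothetical)

hours_saved_per_user = SAVINGS_FOR_USERS * WEEKLY_HOURS          # ~2.2 hours per week
economy_wide_share_saved = SAVINGS_FOR_USERS * USER_HOUR_SHARE   # ~1.4% of all hours

print(f"Hours saved per AI user per week: {hours_saved_per_user:.1f}")
print(f"Share of all hours saved:         {economy_wide_share_saved:.1%}")
```

Under these assumptions the numbers line up with the reported ones; the point is simply that a noticeable saving for individual users translates into a far smaller effect at the level of the whole economy.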

Meanwhile, McKinsey’s January 2025 report reveals a striking disconnect between investment and realised outcomes. Although 92% of companies plan to increase their AI investments, only 1% of leaders consider their companies “mature” in AI use — meaning AI is fully integrated and driving substantial business outcomes. 3

Despite $109.1 billion in U.S. private AI investment in 2024 and 78% of organisations reporting AI use, 4 only 1% have figured out how to make it work effectively. The remaining 99% are still experimenting. Notably, the report also finds that employees are using generative AI more than executives realise, indicating bottom-up adoption rather than a coherent strategy.

In short: AI is boosting productivity, but the returns are modest and reflect a system still in its early, experimental stage.

Even where AI is generating business returns, the results remain moderate. According to the Stanford AI Index Report 2025,

  • 71% of companies using AI in marketing and sales report revenue increases,
  • 63% in supply chain management,
  • and 57% in customer service.

However, in most cases, the reported revenue growth remains below 5%.

In terms of cost savings,

  • 49% of companies using AI in service operations report reductions,
  • 43% in supply chains,
  • and 41% in software development.

Again, the majority of these savings are under 10%.

While these gains are certainly meaningful, they remain far from the exponential transformations often promised in AI narratives.

There is little evidence that these productivity gains, or the broader business returns documented in 2025, are being reinvested to improve public wellbeing, whether through reduced working hours, greater accessibility, or strengthened public services. In fact, the opposite appears to be true.

According to EY, 97% of senior business leaders report a positive return on investment from AI and state that they are reinvesting primarily in internal transformation, productivity, and market expansion.5 Businesswire similarly reports that AI-driven savings are being channelled back into expanding infrastructure and scaling operations — all part of strategic efforts to maintain and reinforce competitive advantage.6

These reinvestments reflect a broader trend: efficiency gains are not being shared — they are being consolidated.

This trend is not just corporate — it’s global. As we explored in last week’s article on AI Inequality, more than 50% of all AI investment in the past decade has been concentrated in North America alone. Only eleven countries have invested more than $3 billion in AI — leaving most of the world excluded from both the benefits and decisions shaping the future of the technology.

Finally, it is worth noting that what companies define as “efficiency” might not be what individuals or society would consider beneficial.

Take customer service, for example. From a corporate perspective, replacing human agents with AI chatbots is “efficient”: it reduces labour costs, scales faster, and operates 24/7. But from the customer’s perspective, it often leads to longer resolution times, repetitive interactions, and unresolved frustration. What improves the balance sheet can simultaneously degrade the user experience.

Efficiency, in this context, doesn’t necessarily mean doing things better; most often, it simply means doing them cheaper.

🧩 Key Insights

  • Big investment, small gains: AI’s returns remain modest relative to spending, reflecting an early stage of implementation.
  • Efficiency is consolidated, not shared: Gains stay within corporations.
  • Corporate efficiency ≠ user benefit: What’s efficient for companies may frustrate customers.

Medical Applications – Significant Impact, Uneven Access

Of all domains studied, healthcare stands out as one of the few areas where AI is already delivering measurable, human-centred impact.

According to Harvard Medical School, OpenEvidence, a clinical decision support app developed with input from its faculty and launched from the Mayo Clinic Platform Accelerate program, allows clinicians to query medical databases, synthesise information, and make decisions in seconds, reducing a two-hour research process to just 15 seconds. The tool enables real-time, evidence-based medicine without disrupting patient interaction. 7

Meanwhile, the FDA approved 223 AI-enabled medical devices in 2023 — up from just six in 2015 — confirming that AI has moved beyond theory and into practice. AI is now integrated into diagnostics, treatment planning, imaging, and workflow automation across major health systems. 8

Even in early 2025, some generative AI systems outperformed human clinicians in specific, time-constrained diagnostic tasks. These improvements are not speculative — they are being documented and deployed today.

These breakthroughs are very promising; they are, however, accompanied by several serious limitations.

Current AI medical systems still rely heavily on datasets that reflect existing social and racial biases. These biases can translate into unequal treatment recommendations or flawed risk assessments, especially for marginalised groups. AI hallucinations — confidently incorrect outputs — also pose safety risks, particularly when used without human oversight.

Moreover, access to these tools is still shaped by resource concentration. Most deployments are occurring in well-funded urban hospitals, while rural or underfunded clinics remain left out. The technology may be scaling — but not equitably.


🧩 Key Insights

  • Doctors benefit from faster diagnostics and improved tools.
  • AI is transforming medical practice — but not healthcare systems.
  • Automation bias in AI risks turning flawed patterns into clinical decisions — with real consequences for equity in healthcare.
  • Further structural issues, such as unequal access and over-centralisation, limit the public health impact.

Environmental Impact – Invisible Costs, Tangible Consequences

While there are promising applications of AI in climate action, many of them rely heavily on speculative data and projected benefits. Despite dominant narratives — such as AI helping to optimise energy grids or reduce emissions in transportation — there is little concrete evidence that these benefits currently outweigh the technology’s accelerating resource demands.

In fact, the opposite trend is more immediately visible. In 2025, power demands from North American data centres more than doubled — a surge largely driven by generative AI. Each ChatGPT query now consumes roughly five times more electricity than a standard web search. And for every kilowatt-hour used, data centres require around two litres of water for cooling. 9
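To give a sense of scale for these figures, here is a rough, hedged calculation. Only the two litres of water per kilowatt-hour and the five-to-one electricity ratio come from the sources cited above; the absolute per-query energy value is a placeholder assumption for illustration.

```python
# Rough illustration of the water footprint per query, using the ratios cited
# above: ~2 litres of cooling water per kWh, and a ChatGPT query drawing roughly
# five times the electricity of a standard web search.
# The absolute per-query energy value is an ASSUMPTION, not a figure from the article.

WATER_L_PER_KWH = 2.0        # litres of cooling water per kWh (figure cited above)
SEARCH_WH = 0.3              # assumed energy of one web search, in Wh (hypothetical)
CHATGPT_WH = SEARCH_WH * 5   # "roughly five times" a web search

water_per_query_l = (CHATGPT_WH / 1000) * WATER_L_PER_KWH
print(f"Water per ChatGPT query:   {water_per_query_l * 1000:.1f} ml")
print(f"Water per million queries: {water_per_query_l * 1_000_000:,.0f} litres")
```

Each individual query is tiny; the footprint only becomes visible once queries are counted in the billions, which is exactly why these costs remain largely invisible in day-to-day use.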

Recent reporting by Bloomberg News adds a geographic dimension: two-thirds of all new AI data centres built since 2022 have been constructed in regions already facing water stress. The dilemma is structural — many of the areas richest in renewable energy, particularly solar, are also the driest. 10

We have explored the complex tension — between AI’s potential in environmental innovation and its growing footprint — in more depth in our dedicated article here.


🧩 Key Insights

  • AI’s current environmental costs are substantial — and accelerating.
  • Current deployment patterns favour scale over sustainability. So far, AI’s resource demands outweigh its contributions to climate solutions.

Human Rights — Principles vs. Practice

AI is frequently framed as a tool to promote fairness and justice — with promises to reduce bias, improve access, and protect vulnerable populations. But in 2025, these promises are often failing to translate into practice.

The LCO/OHRC Human Rights AI Impact Assessment (2024/2025) documents a worrying implementation gap. AI systems deployed in law enforcement, border control, and public services have been found to reinforce — rather than reduce — structural discrimination. Marginalised groups remain disproportionately affected by algorithmic bias, especially in contexts where decisions are automated without transparency, appeal mechanisms, or meaningful oversight.11

These aren’t theoretical risks. The report outlines concrete examples of AI systems creating measurable human rights harms: discriminatory policing patterns, inaccessible public services, opaque welfare decisions. And crucially, most of these systems are being implemented without robust rights-based evaluations — often bypassing the very communities they affect.

Even where risk awareness exists, action is slow. Many institutions lack the technical capacity, legal clarity, or political will to apply rigorous human rights safeguards to AI systems. Meanwhile, tech vendors continue to scale deployments globally, with little accountability for downstream social consequences. Those affected are left to deal with the fallout — not those who profit.

If AI is to serve human dignity, the burden of proof must lie with those deploying it — not with those harmed by it.


🧩 Key Insights

  • Well-documented gap: AI’s human rights principles often fail in practice — resulting in discriminatory policing, opaque welfare systems, and inaccessible services.
  • Accountability lacking: Most systems are deployed without transparency, community input, or enforceable safeguards.
  • Justice requires proof of care: The burden must lie with those deploying AI — not those harmed by it.

Peacebuilding — Genuine Impact, Limited Scale

Among the more hopeful areas of AI application in 2025 is its role in peacebuilding. Here, we find concrete use cases that go beyond hype — from conflict monitoring to inclusive dialogue facilitation. But these examples, while promising, remain limited in scale and reach.

Case studies from the University of Birmingham document how AI has been used to support ceasefire monitoring in Ukraine and Yemen, combining satellite imagery with cryptographic verification. In Sudan, AI-enhanced dialogue platforms have enabled over 6,500 citizens — including those from marginalised groups — to participate in shaping political outcomes.12 The African Union has begun integrating predictive AI tools into its Continental Early Warning System, aiming to detect emerging conflicts before they escalate.13

These are powerful examples of AI being used not to optimise profit, but to protect lives and foster dialogue. And yet, they stand in stark contrast to the scale of AI deployment elsewhere. Compared to the billions spent on AI for sales automation or supply chain management, peacebuilding receives a fraction of funding and attention.

The AU Peace and Security Council has rightly raised concerns about this imbalance, warning that without inclusive governance, AI risks reinforcing geopolitical inequalities rather than solving them. In many regions, AI is not a neutral tool; it’s an imported system, shaped by interests far from the communities it affects.

If we can build AI to predict consumer behaviour with extreme precision, surely we can invest the same effort in predicting — and preventing — violence.


🧩 Key Insights

  • Peacebuilding shows AI’s real potential to serve human security.
  • But applications remain small-scale and underfunded.
  • Scaling peace tech requires shifting priorities — not just better tools.

Conclusion – #AIforGood or #AIforEconomy?

The scale of AI deployment in 2025 reveals an unmistakable imbalance: Economic applications overwhelmingly dominate both investment and implementation. The data presents a sobering reality: while AI is expanding rapidly across sectors, its deployment continues to prioritise economic optimisation — not public transformation.

In short: AI’s economic potential is being scaled. Its social promise remains in pilot mode. This isn’t the result of good intentions gone wrong — it’s a matter of structural priorities.

Artificial intelligence is undeniably improving efficiency and enabling innovation across manufacturing, healthcare, finance, and public services. But these benefits are far from evenly distributed, varying dramatically by sector, geography, and access to infrastructure. At the same time, the technology restructures labour markets, risking deeper divides between early adopters and those left behind. Despite widespread “#AIForGood” narratives, implementation follows profitability rather than societal need.

That doesn’t mean AI can’t serve the public good.
It means it currently isn’t.

It is important to note that long-term societal benefits take time to materialise. One common assumption is that economic deployment lays the groundwork for future social applications — and that early productivity gains will eventually translate into broader welfare improvements. These arguments deserve consideration. However, relying on benefits to “trickle down” from commercial to public domains assumes a natural progression that history has rarely delivered without deliberate intervention.

Without sufficient resources, regulatory frameworks, and political will to support #AIForGood initiatives, these efforts struggle to move beyond rhetoric — leaving the social promise of AI unfulfilled.

And while this analysis has focused primarily on macro-level economic structures, we must also take into consideration the micro-level transformations occurring in our lived experience. AI is fundamentally altering our social ecology: from patterns of interpersonal communication to the mediation of emotional support. Further, the proliferation of algorithmically generated content is reconfiguring our information ecosystem, leading to unprecedented challenges in distinguishing authentic discourse from synthetic artifacts. This saturation of digital spaces threatens to undermine shared epistemological foundations while amplifying information asymmetries. Beyond reorganizing economic systems, these technologies are reshaping our fundamental modes of relation, support, and belonging—changing not just what we know, but how we come to know it.

If we want AI to actually serve people — not just platforms and companies — we need to start asking deeper questions: Not just what kind of technology we want, but what kind of society we want it to help build.

It’s time to rethink the future we’re building.

References

  1. Bick, A., Blandin, A., & Deming, D. (2025, February 27). The Impact of Generative AI on Work Productivity. Stlouisfed.org; Federal Reserve Bank of St. Louis. https://www.stlouisfed.org/on-the-economy/2025/feb/impact-generative-ai-work-productivity
  2. Solow, B., Bosworth, T., & Hall, J. (1995). Understanding the contribution of Information Technology relative to other factors. McKinsey Global Institute. https://www.mckinsey.com/~/media/McKinsey/Featured%20Insights/Americas/US%20productivity%20growth%201995%202000/usprod.pdfy
  3. Mayer, H., Yee, L., Chui, M., & Roberts, R. (2025). Superagency in the Workplace. https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/superagency-in-the-workplace-empowering-people-to-unlock-ais-full-potential-at-work
  4. Stanford Institute for Human-Centered Artificial Intelligence (HAI). (2025). Artificial Intelligence Index Report 2025. https://hai-production.s3.amazonaws.com/files/hai_ai_index_report_2025.pdf
  5. EY Americas. (2025, May 14). EY survey reveals that technology companies are setting the pace of agentic AI – will others follow suit? Ey.com; EY. https://www.ey.com/en_us/newsroom/2025/05/ey-survey-reveals-that-technology-companies-are-setting-the-pace-of-agentic-ai-will-others-follow-suit
  6. Businesswire. (2025, May 7). Stanford HAI’s 2025 AI Index Reveals Record Growth in AI Capabilities, Investment, and Regulation. https://www.businesswire.com/news/home/20250407539812/en/Stanford-HAIs-2025-AI-Index-Reveals-Record-Growth-in-AI-Capabilities-Investment-and-Regulation
  7. Powell, A. (2025, March 20). Machine Healing. Harvard Gazette. https://news.harvard.edu/gazette/story/2025/03/how-ai-is-transforming-medicine-healthcare/
  8. Mesko, B. (2023, December 12). The Current State Of FDA-Approved AI-Enabled Medical Devices. The Medical Futurist. https://medicalfuturist.com/the-current-state-of-fda-approved-ai-based-medical-devices/
  9. Zewe, A. (2025, January 17). Explained: Generative AI’s environmental impact. MIT News; Massachusetts Institute of Technology. https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117
  10. Nicoletti, L., Ma, M., & Bass, D. (2025, May 8). AI Is Draining Water From Areas That Need It Most. https://www.bloomberg.com/graphics/2025-ai-impacts-data-centers-water-data/
  11. Law Commission Of Ontario (LCO), & Canadian Human Rights Commission (CHRC). (2025). Human Rights AI Impact Assessment Backgrounder. https://www.lco-cdo.org/wp-content/uploads/2025/03/LCO-HRIA-backgrounder.pdf
  12. University of Birmingham. (2025). A new kind of peacemaker: AI joins the front lines of diplomacy. University of Birmingham. https://www.birmingham.ac.uk/news/2025/a-new-kind-of-peacemaker-ai-joins-the-front-lines-of-diplomacy
  13. Tchioffo Kodjo. (2025). Press Statement of the 1264th Meeting of the Peace and Security Council, held on 11 March 2025, on the Situation in Sudan. African Union, Peace and Security Department. https://www.peaceau.org/en/article/press-statement-of-the-1264th-meeting-of-the-peace-and-security-council-held-on-11-march-2025-on-the-situation-in-sudan

Join the Discourse

Your opinion matters.

Share your thoughts in the comments!

