How to Navigate the Crisis of Synthetic Reality

When the AI-generated lie becomes indistinguishable from the truth, our reality fractures. The emotional trust we place in our own eyes and ears is eroded, leaving us with a deeply unsettling sense of vulnerability.

My First Encounter with Synthetic Credibility

It began like any ordinary afternoon. After a long day of work, traffic jams, and routine tasks, I surrendered to that daily, almost hypnotic ritual that offers a few moments of solitude: scrolling through social media. The screen’s glow lit my face as my thumb glided down the Instagram feed. Like many, I do it to “stay informed,” to sense the pulse of what’s trending, although I always promise myself not to lose too much time to the endless flow of reels and stories.

The stream of images and clips had become a familiar current of noise that rarely broke the monotony. I followed a few influencers, the account of a university where I collaborate, and a doctor who lives in Geneva. It was the digital equivalent of background music, until, suddenly, one video froze my scrolling thumb.

The perspective was intimate: filmed from a student’s seat in a large lecture hall. At the front stood a visibly furious professor, voice cracking as he shouted at his students. He was sick, he said, of reading papers generated by ChatGPT. He had just graded twenty of them, all identical, soulless copies of a machine’s template. His anger escalated until it tipped into what looked unmistakably like a nervous breakdown. The trembling hands, the desperate tone, the exhaustion behind the rage: everything felt painfully real.

I was stunned, and I believed every second of it. My instinct was to save the video, to share it with my own reflections later. The scene struck a chord with me. My professional interests keep me in close contact with universities, and I’ve heard professors voice the same despair, bewildered by the challenge of detecting student cheating in the era of generative AI. They don’t know where the line is anymore, how to redraw it, or how to reform the whole assessment process [1].

So, when I saw that professor collapse under the weight of his frustration, my mind didn’t question it.

I saw it as the visible manifestation of an anxiety I already knew existed.

When Reality Crumbles

A few moments later, I began reading the comments, and my certainty shattered. It turned out the video was created with Sora 2 [2], the latest in a new generation of text-to-video models whose hyperrealistic creations have flooded the internet, from a dancing fake Jackie Chan to babushkas living with their pet hippos.

So, I replayed it in my head, analyzing every gesture, every sound, and suddenly the truth hit me: I had just been deceived. And I, who considered myself fairly knowledgeable about deepfakes, hadn’t noticed a thing.

For the first time, I couldn’t tell the difference between a synthetic video and a real one. It was a genuine cognitive fracture. The trust I placed in my own perception, a trust I didn’t even realize I relied on, was gone in an instant. Technology had not only crossed the line between illusion and reality; it had erased it entirely. The barrier I thought I could always detect had dissolved right on my own screen, leaving me exposed to a new kind of manipulation I wasn’t prepared for.

That realization unsettled me. If I could be fooled so completely, what about everyone else?

Beyond the Deception: The Deepfake-as-a-Service (DFaaS) Threat

Once you’ve been fooled, curiosity becomes self-defense. I began to read more, hoping information would rebuild the trust that illusion had stolen.

Not long after, I came across a news story about a Chinese company called Haotian AI [3].

Their advertising proudly claimed that it was now impossible to distinguish their deepfakes from real videos. The small imperfections that once betrayed a forgery (awkward head movements, strange lighting, mismatched lips) had been polished away. The simulation had reached a frightening level of precision.

The platform operates through Telegram, a service known for its lenient moderation, and offers high-quality deepfakes for a few thousand dollars. Who are its clients? Criminal “pig butchering” fraudsters operating worldwide, from Nigeria to Myanmar, who build long-term romance or investment scams that end in massive cryptocurrency thefts. The metaphor is disturbingly apt: victims are “fattened” with trust before being financially “slaughtered”.

Reading about it, I felt the same chill that had crawled up my spine during the professor’s meltdown. The simulation was no longer a glitch; it had become a real industry with its own rules, target audience, and marketing tricks, all operating within the new logic of Synthetic Reality.

As I explored further, I realized Telegram was filled with channels flaunting celebrity deepfakes as entertainment, and with “undressing bots” that could strip anyone bare with a single click.

What once required advanced Photoshop skills now takes seconds.

Dark Web Social Engineering Tools

As for hackers, life has become much easier: new Dark Web tools like FraudGPT and WormGPT provide unfiltered content (hacking guides, malware code, fraud techniques), turning even script kiddies [4] into sophisticated social engineers.

Even Europol has begun to sound the alarm: cloned voices, fabricated CEOs, and AI-scripted romances are draining millions from companies and hearts alike [5].

One Japanese company lost $35 million in 2020 after a director’s voice was cloned and used to pull off an elaborate fraud [6].

Europol further predicts that romance scams will increase, accelerated by AI tools that enhance deception through voice cloning, video generation, and real-time translation.

In one real case, a fraudster used deepfake technology during video calls to sustain what the victim believed was a legitimate two-year relationship, ultimately stealing £350,000 from her. The scammer, who had met the victim on a dating website, even proposed with a digitally altered photo showing a man holding a sign that read: “Will you marry me?”.

It sounds surreal, but it happened.

It’s easy to see this as a terrifying omen. But amid my unease, another thought surfaced:

What if this isn’t only a threat?

What if this era of Synthetic Reality isn’t only a threat, but a signal, an invitation for humanity to evolve?

Maybe our antidote to synthetic credibility isn’t panic, but growth. Maybe it is a call to develop a new literacy: the ability to see through digital illusion, to question even what looks unquestionably real.

The Citizen’s Responsibility

While the European Union has developed a comprehensive regulatory basis through the AI Act [7] and the DSA [8], mandating transparency from developers and accountability from platforms, these legislative structures ultimately create only the framework for safety. The final, irreducible line of defense against the current deluge of synthetic fraud does not rest with lawmakers or tech giants; it rests squarely with us, the citizens.

The reason is simple: AI-enabled fraud is a problem of social engineering, not technical failure. The laws can compel disclosure, but they cannot compel belief or skepticism. The fraudster’s goal is to bypass our critical judgment by exploiting our emotions (fear, greed, or empathy), as demonstrated by the professor deepfake or the manipulative “pig butchering” schemes.

Digital literacy must therefore become part of every citizen’s digital resilience. It is the skill we now need to survive this new information landscape, and it has nothing to do with fear or withdrawal.

Being digitally resilient means being aware, attentive, and adaptable. It is skepticism turned into strength against the overwhelming challenge of Synthetic Reality.

To me, digital literacy rests on three essential pillars:

  1. Critical Thinking: the refusal to accept information passively. It means analyzing, questioning, and contextualizing before believing. As Bertrand Russell reputedly put it, “in all affairs it’s a healthy thing now and then to hang a question mark on the things you have long taken for granted.”
  2. Cognitive Scrutiny: a discipline of the mind, training ourselves to spot manipulation, resist cognitive shortcuts, and delay judgment until evidence aligns with intuition.
  3. Human Reaffirmation: the recognition that what defines us is not the technology we build, but our capacity to feel authentically and reason deliberately.

We are no longer merely Homo sapiens; we are Homo resistens, capable of navigating complexity without losing our humanity.

A Call for Intentional Criticality

That night, the fake professor on my screen became something more than a glitch in my feed. He became a reminder that a critical attitude must now be an intentional practice.

I felt a strange hope rise within me. A sort of mental light bulb, faint but sufficient to keep me moving forward.

Because perhaps, in this age of Synthetic Reality, the faint lights are the ones that matter most.

References

  1. Desai, H. (2025). What’s worth measuring? The future of assessment in the AI age. UNESCO. https://www.unesco.org/en/articles/whats-worth-measuring-future-assessment-ai-age
  2. Wikipedia contributors. (2025, November 12). Sora (text-to-video model). Wikipedia. https://en.wikipedia.org/wiki/Sora_(text-to-video_model)
  3. TEHTRIS. (n.d.). Deepfake-as-a-Service threat intelligence report. Datensicherheit.de. https://www.datensicherheit.de/wp-content/uploads/tehtris-deepfake-as-a-service-threat-intelligence-report.pdf
  4. Wikipedia contributors. (2025, October 31). Script kiddie. Wikipedia. https://en.wikipedia.org/wiki/Script_kiddie
  5. Europol. (2022). The changing DNA of serious and organised crime. Europol. https://www.europol.europa.eu/publication-events/main-reports/changing-dna-of-serious-and-organised-crime
  6. Brewster, T. (2021, October 14). Fraudsters cloned company director’s voice in $35 million heist, police find. Forbes. https://www.forbes.com/sites/thomasbrewster/2021/10/14/huge-bank-fraud-uses-deep-fake-voice-tech-to-steal-millions/
  7. EU Artificial Intelligence Act: Up-to-date developments and analyses of the EU AI Act. (n.d.). https://artificialintelligenceact.eu/
  8. Wikipedia contributors. (2025, November 4). Digital Services Act. Wikipedia. https://en.wikipedia.org/wiki/Digital_Services_Act

Almira Zainutdinova

Almira Zainutdinova is an AI Ethicist writing for Meer and collaborating with Digital Peace as an Expert on Digital Impact. With an academic background and professional experience in engineering, her work focuses on how technology can foster communication, intercultural understanding, and peace in the digital era.
