Deepfakes February 25, 2026 · 6 min read

The AI Videos Lying About London

A young man from north-west England has never visited Croydon. Yet he has made dozens of AI-generated videos that convinced millions of people the south London borough is a warzone. The BBC tracked him down, and his explanation for why he does it should make you nervous about everything you see online.

AI-generated deepfake videos about London — how to spot them

Eight Million Views in One Day

The video shows a fictional "taxpayer-funded aquarium" in Croydon, crowded with young men in balaclavas and padded jackets. None of it is real. The aquarium does not exist. The people are AI-generated. But the video spread to eight million views in a single day on TikTok, with comment sections full of outrage from viewers who treated it as documentary footage.

When the BBC investigated the account behind the trend, handle RadialB, they found a man in his 20s from north-west England who has never actually been to Croydon. He produces AI-generated videos depicting fictional scenes of urban decay across UK cities. Fake parks. Fake council estates. Fake institutions taken over by crime. His reasoning for making them look convincing? "If people saw it and immediately knew it was fake, they would just scroll."

That sentence is the entire problem with AI-generated disinformation in 2026. Virality requires believability. And AI tools have made believability cheap.

YouGov data from January 2026 shows a majority of Britons now believe London is unsafe. But only a third of people who actually live in London agree, and 81% say their own local area is safe. The gap between national perception and lived reality tracks almost exactly with how far these AI videos have spread.

A Genre With a Name

Researchers and journalists have started calling this "decline porn" — content designed to make Western cities look overrun with crime and immigration. The genre existed before AI, using real footage stripped of context. But AI has removed the last remaining constraint: you no longer need real footage at all.

RadialB's TikTok account was banned for graphic content. He built a new one and kept posting. Copycat accounts have since emerged, run by users in Israel, Brazil, and across the Middle East, all producing similar content about London's supposed collapse. The content crosses language barriers because the visuals carry the message without needing captions.

Some platforms add small "AI-generated" labels to flagged content. These labels exist on some of RadialB's videos. They are clearly not working. The labels appear as a line of small text that most viewers skip past in the first half-second of a video.

The Wider Deepfake Surge

The Croydon videos are one visible slice of a much larger problem. On February 25, 2026, cybersecurity researchers published fresh warnings about AI-generated content being used for financial fraud and identity theft at scale. Konstantin Levinzon, co-founder of Planet VPN, described the current moment bluntly: "The internet is flooded with fake images and videos that can be created in seconds with low-cost or even free tools."

Also this week, the European Commission opened an investigation into Musk's platform X over sexualized deepfake images generated by its Grok chatbot. Filmmaker Ruairi Robinson shared a video of Tom Cruise and Brad Pitt apparently fighting in a filmed scene — generated from a two-line text prompt. It went viral before most viewers realized it was AI.

These incidents share a common thread: the content is convincing enough that the initial viral spread happens before debunking catches up.

What to Actually Look For

AI video generation has improved fast, but it still leaves traces. Here are the most reliable tells in 2026:

  • Blinking patterns. Real people blink every 2 to 10 seconds, at random intervals. AI-generated faces frequently stare for unnatural lengths of time, or blink in rhythmic, machine-like patterns. Watch a face for 10 seconds and count.
  • Hair and skin boundaries. Where hair meets skin, or where faces meet background, AI tools still blur or smear edges at high magnification. Pause and zoom in on any face near the edges.
  • Teeth and hands. Fingers and teeth remain the hardest elements for generative models to render correctly. Too many fingers, oddly shaped teeth, or hands that don't match body proportions are strong red flags.
  • Background inconsistency. In AI-generated crowd scenes, look at background figures. They often repeat, have mismatched proportions, or move in ways that don't match physics. RadialB's videos show crowds where background figures clip through each other.
  • The verifiability test. Ask: does this specific building, institution, or event actually exist? A few seconds on Google Maps or a quick search for the location name will kill most fabricated "decline porn" videos before they can affect your view of a place.
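The tells above amount to a simple checklist: the more that fire, the more skeptical you should be. A minimal sketch of that idea in Python is below. The field names, weights, and threshold are invented for illustration, not drawn from any published detector or from FakeOut's actual methods.

```python
# Toy checklist scorer for the manual tells described above.
# Each flag is a yes/no answer from a human reviewer; the score
# just counts how many tells fired. Purely illustrative.
from dataclasses import dataclass, astuple

@dataclass
class Checklist:
    unnatural_blinking: bool = False        # rhythmic or absent blinks
    blurred_hair_skin_edges: bool = False   # smearing at high zoom
    malformed_hands_or_teeth: bool = False  # extra fingers, odd teeth
    background_inconsistency: bool = False  # repeating/clipping figures
    location_unverifiable: bool = False     # place not found on a map

def suspicion_score(c: Checklist) -> int:
    """Count the tells that fired. Two or more warrants a closer look
    (an arbitrary threshold chosen for this sketch)."""
    return sum(astuple(c))

# Example: a video with robotic blinking and a location that
# doesn't exist on Google Maps.
video = Checklist(unnatural_blinking=True, location_unverifiable=True)
print(suspicion_score(video))  # prints 2
```

This is deliberately crude: a real detector would analyze pixels and metadata, not rely on a human filling in booleans. The point is that even a manual tally of the five tells gives you a repeatable habit to apply before sharing.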

Platform Labels Are Not Enough

Both TikTok and Instagram have policies requiring creators to label AI-generated content. RadialB's own account demonstrates the gap between policy and practice: the labels exist, he kept posting after a ban, and the content kept spreading. Copycat accounts in other countries face no enforcement at all.

For UK policymakers, the Online Safety Act created obligations around harmful content, but AI-generated disinformation that isn't obviously illegal sits in a grey area most platforms haven't resolved. The speed of spread (eight million views in one day) outpaces any human moderation system.

The realistic answer, for now, is individual skepticism backed by fast tools. When a video claims to show something outrageous happening in a specific city, pause before sharing. Check whether the location is real. Look at the faces for the tells above. Reverse image search a frame if something feels off.

FakeOut can run AI detection on suspicious images and videos in seconds, flagging synthetic faces, inconsistent metadata, and generation artifacts. That is exactly the kind of fast check that catches content like RadialB's before it shapes what you think you know about a place. Free on Android, with an iOS beta currently in development.