How-To Guide · March 26, 2026 · 7 min read

How to Spot AI-Generated Images and Deepfakes in 2026

AI image generators have improved so fast that many fakes now fool journalists, voters, and even trained fact-checkers at first glance. Here is a practical checklist you can run through in under two minutes, whether you are scrolling WhatsApp in Mumbai or X (formerly Twitter) in Oslo.

[Image: Abstract illustration of a fractured digital face dissolving into teal data streams on a dark background]

During India's 2024 general election, the country's Deepfakes Analysis Unit flagged hundreds of synthetic videos circulating on WhatsApp and Telegram. Some showed politicians making statements they never made. Others recycled genuine footage with AI-swapped faces. Voters in states with limited fact-checking access had almost no way to verify what was real. That problem has not gone away. If anything, the tools to make convincing fakes have become cheaper and faster while detection literacy among ordinary users has barely moved.

The good news: AI-generated content still leaves traces. Generators trained on billions of images have characteristic failure modes, and knowing what to look for puts you ahead of most casual viewers.

Start with the obvious: read the context

Before studying a single pixel, ask yourself three questions. Where did this image come from? Who is sharing it and why? Does the claim it supports seem too convenient or emotionally loaded? The Reuters Institute's research on Indian media consumption found that highly partisan content travels fastest precisely because it triggers strong emotional responses. That emotional spike is often the first signal something is off.

Run a reverse image search using Google Images or TinEye. Paste the URL or upload a cropped version of the image. If the original source shows it is from a different event, a different country, or a different year, you have found your answer without any AI analysis at all. A large share of viral "AI fakes" are actually genuine photos stripped of their original context.
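
If you check images often, a small script can save the copy-and-paste step. Here is a minimal sketch in Python; the Google Lens and TinEye URL patterns are assumptions based on their current public web endpoints and could change without notice:

```python
import urllib.parse
import webbrowser

def reverse_search(image_url: str) -> None:
    """Open reverse image searches for a publicly hosted image URL."""
    encoded = urllib.parse.quote(image_url, safe="")
    # Assumed public endpoints; both may change without notice.
    targets = [
        f"https://lens.google.com/uploadbyurl?url={encoded}",
        f"https://tineye.com/search?url={encoded}",
    ]
    for target in targets:
        webbrowser.open(target)  # opens each search in your default browser

reverse_search("https://example.com/suspicious-photo.jpg")
```

Note that this only works for images already hosted at a public URL; for a screenshot saved on your phone, upload the file through the search engines' own pages instead.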

Visual cues that generators still get wrong

Modern diffusion models such as Stable Diffusion, Midjourney, DALL-E 3, and Adobe Firefly have improved dramatically, but they still struggle with specific details:

  • Hands and fingers: Count them. AI models notoriously generate six fingers, fused knuckles, or hands that taper unnaturally at the wrist. This has improved with newer models but remains one of the most reliable tells for quickly generated content.
  • Text inside images: Logos, signs, T-shirt prints, and newspaper headlines rendered inside AI images typically contain garbled or nonsense characters. Real photographs capture text accurately.
  • Ears and jewellery: Earrings often appear asymmetric or fused to the earlobe. The curvature of ear cartilage is frequently wrong in face-swap deepfakes.
  • Background coherence: Zoom into the edges where a person meets the background. AI compositing often produces a subtle halo or blurred fringe, especially around hair.
  • Lighting consistency: Check whether shadows on the face match the light direction suggested by the rest of the scene. Face-swap models frequently paste a face lit from one direction onto a body lit from another.

Quick test: On a smartphone, zoom in to 300% on the eyes and teeth. AI face generators often produce teeth that look slightly too uniform, or irises with symmetrical patterns that do not match the randomness of real human eyes. In deepfake videos, watch the eye blink rate. Early models rarely blinked; newer ones overcorrect and blink too often or at irregular intervals.
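
If the image is on a laptop rather than a phone, you can do the same crop-and-enlarge check in a few lines of Python. A sketch using the Pillow imaging library; the filename and crop coordinates are placeholders you would point at the eyes, teeth, or hands:

```python
from PIL import Image  # pip install Pillow

def zoom_region(path: str, box: tuple[int, int, int, int], factor: int = 3) -> None:
    """Crop a region (left, upper, right, lower) and enlarge it for inspection."""
    img = Image.open(path)
    region = img.crop(box)
    # Nearest-neighbour resampling is deliberate: it preserves the
    # pixel-level artifacts that smoother filters would average away.
    enlarged = region.resize(
        (region.width * factor, region.height * factor),
        resample=Image.Resampling.NEAREST,
    )
    enlarged.show()

# Placeholder coordinates: aim the box at the eyes or teeth.
zoom_region("suspicious.jpg", box=(420, 180, 620, 280))
```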

Tools you can use right now

Several free and low-cost detection tools are available to anyone with a browser:

  • Hive Moderation: Hive's deepfake detection model analyzes images and video frames, labeling faces as deepfake or authentic. It works on both static images and uploaded video clips.
  • Deepware Scanner: Specifically designed for video, Deepware scans frames for face manipulation and gives a probability score. Useful when you receive a suspicious clip.
  • MIT Media Lab's DetectFakes experiment: Run your own intuition training at media.mit.edu/projects/detect-fakes. It shows you real and AI-generated face pairs and helps calibrate your eye over time.
  • Content Credentials (C2PA): Adobe, Google, Microsoft, and the BBC are rolling out a standard called C2PA that cryptographically signs images at the point of capture. When an image carries a Content Credentials badge, you can verify exactly which camera captured it and whether it was edited. The C2PA specification version 2.2, published in May 2025, expanded support to video and audio. Look for the "cr" icon on platforms that have adopted the standard, or check a file yourself with the sketch after this list.
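
If you are comfortable on the command line, Adobe's open-source c2patool can read a file's Content Credentials directly. Below is a minimal sketch that calls it from Python; it assumes c2patool is installed and on your PATH, and that its default JSON report format has not changed:

```python
import json
import subprocess

def read_content_credentials(path: str) -> dict | None:
    """Return the C2PA manifest for a file, or None if it carries no credentials."""
    # c2patool prints the manifest store as JSON when given a media file.
    result = subprocess.run(
        ["c2patool", path], capture_output=True, text=True
    )
    if result.returncode != 0:
        return None  # no manifest found, or the tool reported an error
    return json.loads(result.stdout)

manifest = read_content_credentials("photo.jpg")
print("Signed provenance found" if manifest else "No Content Credentials")
```

An absent manifest does not prove an image is fake; most genuine photos in circulation today were captured before C2PA-capable cameras and platforms existed.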

What about AI-generated video?

Video deepfakes add a time dimension to all the image problems above. Watch for unnatural head movement when a person is speaking. Real speakers move their heads continuously in small, variable ways. Early deepfake models produce a subtle "bobblehead" effect where the face seems slightly detached from the neck. Also check mouth sync: even a 100ms delay between lip movement and audio is perceptible once you know to look for it.

During the 2024 Indian election cycle, BBC reporters identified deepfake campaign videos by slowing playback to 0.5x speed in YouTube's player. Compression artifacts around the mouth area became obvious at half speed, even on videos that looked convincing at normal playback.
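
You can apply the same slow-motion trick to a clip saved on your own machine by rendering a half-speed copy. A sketch that shells out to ffmpeg, assuming it is installed; setpts stretches the video timestamps while atempo slows the audio without shifting its pitch:

```python
import subprocess

def half_speed(src: str, dst: str) -> None:
    """Write a 0.5x-speed copy of a video for frame-by-frame inspection."""
    subprocess.run(
        [
            "ffmpeg", "-y", "-i", src,        # -y overwrites dst if it exists
            "-filter:v", "setpts=2.0*PTS",    # double each frame's timestamp -> half speed
            "-filter:a", "atempo=0.5",        # slow the audio to match, pitch unchanged
            dst,
        ],
        check=True,
    )

half_speed("forward.mp4", "forward_slow.mp4")
```

Step through the result and watch the region around the mouth, where compression artifacts in manipulated footage tend to concentrate.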

When tools are not enough

Detection tools give probabilities, not verdicts. A tool saying "70% likely AI-generated" means there is meaningful uncertainty. When the stakes are high, cross-check with fact-checking organizations. BOOM Live and Alt News cover Indian-origin viral content in depth. AFP Fact Check Asia covers Southeast Asia. Snopes and PolitiFact handle English-language international claims. These teams have journalists who do source verification that no automated tool can replicate.

The broader lesson from India's experience is that detection is a habit, not a one-time check. Most people who spread misinformation do so because they never paused to question the image at all. Slowing down for thirty seconds is often enough.

For a faster mobile workflow, FakeOut combines AI image analysis with reverse search in a single tap. Free on Android, with iOS beta in development. It is designed for exactly the kind of quick verification check described above, whether you are fact-checking a WhatsApp forward or a news article screenshot.