Deepfakes & Fraud 2026-04-03 · 7 min read

The AI Influencer Who Never Existed

An AI-generated woman in military uniform hit 1 million Instagram followers before the platform took the account down. She was never real. Her followers had no idea. This is the new face of synthetic social media.

[Image: Digital holographic face dissolving into AI code fragments against a dark background]

In December 2025, an account for "Jessica Foster" appeared on Instagram. The profile showed a blonde woman in US military uniform: sitting on a bunk bed in barracks, walking a tarmac beside Donald Trump, posing in an office chair. The account linked to an OnlyFans page selling foot photos. By the time Instagram removed it in March 2026, Foster had accumulated over 1 million followers. She was entirely generated by AI.

Foster's account, documented by Fast Company and the Washington Post, is not a one-off. It represents a fast-growing industry where creators build AI personas, grow audiences using politically resonant aesthetics, and then monetise that following through subscription platforms, merchandise, or affiliate links. The combination of political imagery, parasocial appeal, and AI generation has proven remarkably effective at evading both platform moderation and human scepticism.

Why the Formula Works

The Jessica Foster case illustrates a specific playbook. Creators pick an archetype that generates strong tribal loyalty: military, patriot, religious, or nationalist. They generate consistent images of a fictional but believable person in that niche. The character never speaks, never does live video, never replies to comments. Followers fill the silence with their own projections.

Sam Gregory, executive director of Witness, an organisation that works on human rights and deceptive AI, put it plainly: generative AI has made it "trivially easy to generate a scene that looks pretty realistic and to place real individuals into scenes." For entirely fictional characters, the bar is even lower. There is no real person to contradict the story.

According to Purdue University's Governance and Responsible AI Lab (Grail), researchers catalogued more than 1,000 English-language social media posts featuring fake images or videos of prominent political figures since the start of 2025 alone. In the eight years before that, the total was 1,344. The monthly rate of new posts has increased roughly fivefold.

Key stat: Grail's database recorded almost as many AI political disinformation posts in 15 months as in the previous eight years combined. Fake influencer accounts are part of the same surge, operating just below the threshold of what platforms currently monitor.

How Platforms Are Responding (and Failing)

Instagram removed the Foster account, but only after it had already reached a million followers and driven significant traffic to paid platforms. Platform moderation is reactive by design: accounts get flagged, reviewed, and removed after the damage is done. By the time a synthetic persona is taken down, the creator has often already exported the audience to Telegram groups, email lists, or alternative platforms.

TikTok, Instagram, and YouTube all require creators to label AI-generated content, but enforcement relies heavily on self-disclosure. A creator building a fake persona has no incentive to disclose. Detection tools that platforms use tend to flag AI-generated video better than static images, so still-image-only accounts remain a particular blind spot.

The absence of a live video track record is one of the clearest structural tells. Real influencers with a million followers almost always have stories, reels, or live sessions in their history. An account that stays exclusively in polished stills, never engages in real-time, and never has any candid behind-the-scenes content is worth scrutinising carefully.

What a Fake Influencer Account Actually Looks Like

Once you know what to look for, the patterns are recognisable across platforms and niches. AI-generated influencer accounts tend to share several characteristics:

  • No live or video content. Still images only, often in high-polish editorial style. The lighting and composition are too consistent for a real person's camera roll.
  • No replies to comments. Followers ask questions the persona never answers. Real influencers at any level of following respond at least occasionally.
  • Slight facial inconsistencies across photos. AI generation can maintain a consistent look but rarely nails exact facial geometry from post to post. Compare the ear shape, the eye spacing, and the hairline across multiple images.
  • Suspicious backstory gaps. The account has no tagged photos from friends, no crossover with real-world events, and no verifiable employment or location history.
  • Aggressive monetisation from day one. A link to OnlyFans, Patreon, or a Telegram group appears within the first few posts. Real creators typically build an audience before pitching paid content.
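The checklist above can be sketched as a simple red-flag counter. This is a minimal illustration, not any platform's API: the field names and the idea of a hypothetical profile summary are assumptions, and a real triage tool would weight signals rather than count them equally.

```python
from dataclasses import dataclass

# Hypothetical profile summary; field names are illustrative,
# not taken from any platform's real data model.
@dataclass
class ProfileSignals:
    has_live_or_video: bool       # any stories, reels, or live sessions on record
    replies_to_comments: bool     # ever responds to followers
    facial_consistency: bool      # ears, eye spacing, hairline match across posts
    has_verifiable_history: bool  # tagged photos, real-world events, employment
    monetised_from_start: bool    # paid links within the first few posts

def red_flag_count(p: ProfileSignals) -> int:
    """Count how many of the five checklist red flags an account trips (0-5)."""
    flags = [
        not p.has_live_or_video,
        not p.replies_to_comments,
        not p.facial_consistency,
        not p.has_verifiable_history,
        p.monetised_from_start,
    ]
    return sum(flags)

# A Foster-style account trips every flag; a typical real creator trips none.
suspect = ProfileSignals(False, False, False, False, True)
genuine = ProfileSignals(True, True, True, True, False)
print(red_flag_count(suspect), red_flag_count(genuine))  # 5 0
```

Equal weighting keeps the sketch honest about what it is: a prompt for manual review, not a classifier.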

Beyond Fraud: The Political Dimension

Not every AI influencer account is built purely to make money. Researchers at Grail and Witness have documented accounts that use fictional personas to promote specific political positions. The Foster account did both simultaneously: it profited from the audience while reinforcing a particular image of political identity.

Daniel Schiff, assistant professor of technology policy at Purdue and co-director of Grail, described the effect in March 2026: "We are blending the lines between political cartoons and reality. A lot of people feel like these images or videos, or the stories they convey, feel true." The emotional resonance persists even when a viewer intellectually knows the content is synthetic.

This is particularly relevant for elections. Fake influencer accounts can normalise a narrative, drive traffic to real campaign content, or simply pollute the information environment enough that people stop trusting anything they see. The harm is not always a single viral lie. Sometimes it is a steady drip of synthetic content that quietly shifts what feels normal.

What to Do When You Suspect a Fake Account

Reverse image search is still the fastest first check. Tools like Google Lens and TinEye can surface whether a profile photo has appeared elsewhere under a different name. If the image returns no results at all, that is itself suspicious for an account claiming to be a public figure with a large following.
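Under the hood, reverse image search relies in part on perceptual fingerprints that survive recompression and light edits. Below is a minimal average-hash sketch over a toy grayscale pixel grid: real engines like Google Lens and TinEye use far more sophisticated matching, so this only illustrates the core idea of comparing compact fingerprints rather than raw pixels.

```python
# Minimal average-hash (aHash) sketch. Input is a tiny grayscale grid
# (rows of 0-255 brightness values); real tools first resize the image
# down to a small fixed grid before hashing.

def average_hash(pixels):
    """Bit string: '1' where a pixel is brighter than the image's mean."""
    flat = [v for row in pixels for v in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if v > mean else '0' for v in flat)

def hamming_distance(h1, h2):
    """Number of differing bits; a small distance suggests the same image."""
    return sum(a != b for a, b in zip(h1, h2))

original = [[10, 200], [220, 30]]
slightly_edited = [[12, 198], [225, 28]]   # e.g. a recompressed copy
different = [[200, 10], [30, 220]]         # unrelated image

h0 = average_hash(original)
print(hamming_distance(h0, average_hash(slightly_edited)))  # 0: near-duplicate
print(hamming_distance(h0, average_hash(different)))        # 4: no match
```

A near-duplicate hashes to the same fingerprint even after small pixel changes, which is why a stolen or reposted profile photo tends to surface in a reverse search despite cropping or recompression.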

For individual images shared from these accounts, AI detection tools can flag synthetic generation artifacts that the naked eye misses: texture inconsistencies, background geometry errors, and lighting that does not match across the frame. Apps built specifically for this analysis can give you a confidence score in seconds.

Report accounts you believe are synthetic to the platform directly. Most platforms have an "impersonation" or "fake account" category in their reporting flow. In the EU, the Digital Services Act now requires platforms to act on flagged content within 24 hours for systemic risks. In India, the IT Rules 2026 mandate a 3-hour takedown window for deepfakes once reported.

Quick check: Before you follow or share content from an account you just discovered, spend 60 seconds on three things: look for a live video, try a reverse image search on their profile photo, and check whether they have ever replied to a comment. If all three come up empty, you are probably looking at a synthetic persona.

The Jessica Foster account grew to a million followers before anyone stopped it. Platforms will keep improving their detection, but right now the gap between what AI can generate and what moderation systems can catch is wide enough to build an entire audience in. Knowing what to look for is still your best defence.

If you come across an image or account that looks suspicious, FakeOut can run an AI detection check in seconds. It is free on Android, with iOS beta in development. Run the image, check the score, and trust your instincts if something feels off.