News & Incidents · 2026-04-06 · 7 min read

AI Robocalls in Elections: What Happened in New Hampshire and What Comes Next

In January 2024, thousands of New Hampshire voters got an automated call that sounded like President Biden. The voice told them not to vote in the primary, to save their vote for November. It wasn't Biden. Someone had cloned his voice using AI and used it to suppress votes. That's not a hypothetical scenario. It already happened.


Election misinformation used to mean doctored photos or fake news articles. Now it includes cloned voices that can sound like almost anyone, and most people can't tell the difference on a phone call. The Brennan Center for Justice flagged the New Hampshire robocalls in a March 2025 report as an early example of how AI voice tools can be used as political weapons.

The New Hampshire Incident: What Actually Happened

Ahead of the primary, thousands of voters received automated calls featuring an AI-generated version of President Biden's voice. The message told them to skip the primary and save their vote for the general election. It was voter suppression using a cloned voice. What made it notable was how easy it was to pull off: AI voice synthesis tools are widely available online, and the whole operation needed little more than one of those tools and a burner phone.

How Deepfake Audio Actually Works

Deepfake audio uses machine learning models trained on recordings of real people. The model learns someone's vocal patterns, tone, and speech rhythms, then generates new audio in their voice. The output can be convincing enough to fool someone on a phone call. The same technology has legitimate uses, like accessibility tools and voice restoration. But when someone uses it to put words in a politician's mouth, the results can affect how people vote.
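To make "learns someone's vocal patterns" a little more concrete: before any model sees a voice, the raw waveform is typically chopped into short overlapping frames and turned into per-frame features the model can learn from. The sketch below is a deliberately minimal, stdlib-only illustration of that first step, using a synthetic tone as a stand-in for recorded speech; real systems use spectral features and neural networks, not the toy energy feature shown here.

```python
import math

def frame_signal(signal, frame_len=256, hop=128):
    """Split a waveform into overlapping frames -- the standard
    first step in speech feature extraction."""
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frames.append(signal[start:start + frame_len])
    return frames

def frame_energy(frame):
    """Short-time energy: one crude per-frame feature. Real voice
    models learn from far richer spectral features per frame."""
    return sum(s * s for s in frame) / len(frame)

# Toy one-second "recording": a 220 Hz tone at an 8 kHz sample rate,
# standing in for actual speech audio.
sr = 8000
signal = [math.sin(2 * math.pi * 220 * n / sr) for n in range(sr)]

frames = frame_signal(signal)
energies = [frame_energy(f) for f in frames]
print(len(frames))  # number of overlapping frames extracted
```

A cloning model consumes sequences of features like these from hours (or, with newer tools, just seconds) of a target's recordings, then learns to generate new feature sequences in the same style and render them back to audio.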

The Global Reach of Digital Deception

The New Hampshire case got attention in the US, but AI-generated audio is being used in elections globally. India, Southeast Asia, the Middle East, Africa, Latin America, and Europe have all seen political actors experiment with manipulated media. Deepfake audio is harder to spot than doctored images or video. People tend to trust what they hear, especially a familiar voice on a phone call.

What's actually at stake: AI robocalls don't need to fool everyone to have an effect. If they make even a small number of people doubt what's real, or sit out an election, the impact can be decisive. When voters can't tell real from fake, trust in public information breaks down. That's a problem for any election system.

Fighting Back: Detection and Verification

Detecting AI-generated audio requires specialized tools. Researchers have built AI models trained to spot the subtle artifacts in synthetic speech, and forensic analysis can sometimes find fingerprints left by generative AI. Fact-checkers like BOOM Live, AltNews, AFP Fact Check Asia, Snopes, and PolitiFact can verify viral content, but they're usually playing catch-up. Proactive detection is what's actually needed.
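The "subtle artifacts" that detection models look for are statistical irregularities in the signal. As a loose illustration only: one classic signal statistic is spectral flatness, which distinguishes spectrally "clean" audio from rougher, more natural-sounding audio. The stdlib-only sketch below computes it on two synthetic signals; this heuristic is a stand-in for the learned artifact detectors the researchers above actually build, not a real deepfake detector.

```python
import cmath
import math
import random

def dft_magnitudes(frame):
    """Naive DFT magnitude spectrum (fine for a short toy frame)."""
    N = len(frame)
    return [abs(sum(frame[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N)))
            for k in range(N // 2)]

def spectral_flatness(mags):
    """Geometric mean over arithmetic mean of spectral power.
    A pure tone concentrates energy in one bin (flatness near 0);
    noisier, more speech-like signals spread energy out (higher
    flatness). Real detectors learn subtler cues than this."""
    power = [m * m + 1e-12 for m in mags]
    log_mean = sum(math.log(p) for p in power) / len(power)
    return math.exp(log_mean) / (sum(power) / len(power))

random.seed(0)
N = 128
tone = [math.sin(2 * math.pi * 5 * n / N) for n in range(N)]  # unnaturally clean
noisy = [t + random.gauss(0, 0.3) for t in tone]              # rougher signal

flat_tone = spectral_flatness(dft_magnitudes(tone))
flat_noisy = spectral_flatness(dft_magnitudes(noisy))
print(flat_tone < flat_noisy)  # the clean tone has far lower flatness
```

Production detectors replace hand-picked statistics like this with classifiers trained on large corpora of real and synthetic speech, which is why they can catch artifacts no single formula captures.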

  • Education: Public awareness campaigns are essential to inform voters about the existence and dangers of deepfake audio.
  • Platform Responsibility: Social media platforms and telecommunications providers must implement stricter policies and develop better mechanisms for identifying and removing AI-generated misinformation.
  • Legislative Action: Governments worldwide are grappling with how to regulate AI misuse in elections, with discussions around the EU AI Act, US bills, and India's emerging regulations. Clear legal frameworks are needed to deter bad actors and hold them accountable.

There's no single fix here. Detection technology, media literacy, and policy all need to move together. FakeOut is built to help people detect AI-generated images and, as the product develops, audio as well. It's free on Android, with an iOS beta in progress. Understanding how these tools work is the first step. Using them is the second.