What Are Content Credentials? The AI Watermarking Standard Coming to Your Feed in 2026
A new industry standard promises to label AI-generated images before they reach you. The EU AI Act mandates it by August 2026. But the biggest platforms are already stripping it out, and most people have no idea this is happening.
On August 2, 2026, Article 50 of the EU AI Act takes full effect. Among other requirements, it forces any company offering an AI system that produces synthetic images, video, or audio to mark those outputs in a machine-detectable way. Not just a visible label on screen. An actual embedded, cryptographic signal that software can read.
The industry's answer to that requirement is called C2PA, short for the Coalition for Content Provenance and Authenticity. Adobe, Microsoft, Google, Meta, and OpenAI all back it. DALL-E 3 supports it. Microsoft Paint's AI Image Creator now adds optional C2PA manifests. ByteDance's Seedance 2.0, used in CapCut's global launch this April, ships with C2PA watermarking built in.
So what is it, exactly? And why is it not enough?
How C2PA Works
C2PA embeds a signed cryptographic manifest into an image or video file at the moment of creation. That manifest records who made the file, which tool was used, whether any AI was involved, and whether the file has been edited since. Think of it as a tamper-evident envelope wrapped around the content itself.
When you open a file with a C2PA validator (the Content Authenticity Initiative has a free one at contentauthenticity.org), it checks the cryptographic signature and displays the provenance chain. If the file has not been altered and the signing certificate is valid, it shows as verified. If someone has modified the image after signing, the signature breaks and the validator flags it.
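The tamper-evident idea can be sketched in a few lines. This is a conceptual toy, not the real format: actual C2PA manifests live in a JUMBF box and are signed with X.509 certificates and COSE signatures, while here an HMAC over a hash of the content stands in for the signature.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key-not-a-real-certificate"  # hypothetical key, illustration only

def sign_manifest(content: bytes, tool: str, ai_generated: bool) -> dict:
    """Bind provenance claims to the content via its hash, then sign the claims."""
    claims = {
        "tool": tool,
        "ai_generated": ai_generated,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": signature}

def verify(content: bytes, manifest: dict) -> bool:
    """Valid only if the signature checks out AND the content is unmodified."""
    payload = json.dumps(manifest["claims"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # the claims themselves were tampered with after signing
    return manifest["claims"]["content_sha256"] == hashlib.sha256(content).hexdigest()

image = b"\x89PNG...pretend-pixel-data"
m = sign_manifest(image, tool="ExampleGen 1.0", ai_generated=True)
assert verify(image, m)                  # untouched file: verified
assert not verify(image + b"edit", m)    # any edit to the content breaks the seal
```

The key property is the one the envelope metaphor describes: you cannot change either the content or the claims without invalidating the signature.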
The EU's draft Code of Practice on marking AI-generated content, published in its second version on March 3, 2026, explicitly points to this kind of latent embedded disclosure as the preferred approach. The final version is expected in June 2026.
The Platform Problem
Instagram, X (Twitter), and WhatsApp all strip C2PA metadata when you upload an image. The credential that proves an image is AI-generated survives creation, survives download, and then disappears the moment it hits the platforms that billions of people actually use.
This is not a bug. It is a side effect of how these platforms handle uploads. When you post an image, they recompress and resize it for delivery. That recompression process removes embedded metadata, including C2PA manifests. LinkedIn and TikTok currently preserve credentials. Instagram and X do not.
This creates a specific failure mode: a validator looking at an image on X will see no credentials and show "unverified." That does not mean the image is authentic. It may mean it is an AI-generated image that lost its label in transit. The absence of a C2PA credential proves nothing about whether the image is real.
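The correct reading of each validator outcome can be written down explicitly. The labels below are illustrative, not any real validator's output strings; the point is that "no credential" maps to "unknown," never to "authentic."

```python
def interpret(has_manifest: bool, signature_valid: bool = False) -> str:
    """Map a validator result to what it actually tells you."""
    if not has_manifest:
        return "unknown"       # could be real, could be AI content stripped in transit
    if not signature_valid:
        return "tampered"      # modified after signing, or an invalid certificate
    return "provenance-verified"  # origin is traceable; truth of content not implied

assert interpret(has_manifest=False) == "unknown"
assert interpret(True, signature_valid=False) == "tampered"
assert interpret(True, signature_valid=True) == "provenance-verified"
```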
As of March 2026, researchers testing C2PA in the wild found that images carrying valid provenance metadata are still rare on the open web. The standard exists, but deployment is early and uneven.
Google's Different Approach: SynthID
Google takes a different approach with SynthID. Instead of embedding metadata that gets stripped by re-encoding, SynthID modifies the image's actual pixel values using neural network transformations. The watermark is invisible to the human eye but detectable by the SynthID classifier.
Because the signal is in the pixels themselves, standard editing and re-uploading workflows do not remove it the way they strip metadata. Google is rolling SynthID out across images, audio, video, and text, and has joined C2PA to make both approaches work together.
SynthID is not a complete answer either. Its detection is probabilistic, not binary, producing a confidence score rather than a definitive verdict. And it currently only covers images generated by Google's own tools. A deepfake produced with another generator will not carry a SynthID watermark.
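A toy spread-spectrum sketch shows the flavor of a pixel-domain watermark and why detection yields a score rather than a verdict. SynthID's actual scheme uses learned neural transformations and is not public in this form; here a seeded pseudo-random plus/minus-one pattern is added to pixel values and detected by correlation.

```python
import random

WIDTH = 4096  # pixels, flattened for simplicity

def pattern(seed: int = 42) -> list[int]:
    """Secret +/-1 pattern; the detector must know the same seed."""
    rng = random.Random(seed)
    return [rng.choice((-1, 1)) for _ in range(WIDTH)]

def embed(pixels: list[int], strength: int = 2) -> list[int]:
    """Nudge each pixel by an imperceptible amount in the pattern's direction."""
    return [max(0, min(255, p + strength * w)) for p, w in zip(pixels, pattern())]

def detection_score(pixels: list[int]) -> float:
    """Normalized correlation with the secret pattern: near zero for unmarked
    images, clearly positive for marked ones. A real detector thresholds a
    score like this, hence 'probabilistic, not binary'."""
    mean = sum(pixels) / len(pixels)
    return sum((p - mean) * w for p, w in zip(pixels, pattern())) / len(pixels)

rng = random.Random(0)
plain = [rng.randrange(256) for _ in range(WIDTH)]
marked = embed(plain)
assert detection_score(marked) - detection_score(plain) > 1.0
```

Because the signal lives in the pixel values, re-encoding the image does not automatically erase it the way re-encoding erases a metadata segment, though heavy edits degrade the correlation and lower the score.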
What "No Watermark" Actually Means
Here is the practical reality in April 2026: most AI images shared on social media reach viewers with no credentials attached. Some never had any, because they came from tools that support neither standard. Others had credentials and lost them on upload.
- No C2PA manifest does not mean the image is real.
- A valid C2PA credential means the image's origin is traceable, not that its content is truthful.
- Subjective judgments ("this looks AI-generated") are unreliable and getting worse as generators improve.
This is the gap that visual forensics tools fill. Provenance standards tell you where an image came from, if the credential survived. Detection tools analyze the image itself for artifacts that AI generators leave behind: frequency anomalies, texture inconsistencies, lighting errors. These two approaches are not in competition. They work on different signals.
What Happens After August 2026
The EU AI Act deadline is the first major regulatory forcing function. Companies that deploy AI image generators and serve EU users will need a machine-detectable disclosure mechanism. C2PA is the most likely compliance path for most of them.
The US has no federal equivalent yet, though several state bills are active. India's IT Ministry has discussed AI labeling requirements but has not passed binding law as of this writing. Globally, the pattern is the same: standards exist, mandates are coming, and deployment is far behind both.
For the next few years, the practical situation is that watermarking will be present on some images but absent on many, and absent for reasons that have nothing to do with whether the image is authentic. Reading that signal correctly requires understanding what it does and does not tell you.
FakeOut analyzes images using visual forensics independent of metadata, which means it works whether a C2PA credential survived the upload or not. It is free on Android, with iOS beta in development. In an environment where the label can be stripped away before you even see the image, checking the image itself is the only reliable option.