Marketing · Brand Strategy

78% of Brands Use AI Content Without Disclosing It. That’s About to Become a Real Problem.

Consumer trust data, new IAB guidelines, and a growing awareness gap mean the “we’ll figure it out later” approach to AI disclosure has a shorter runway than most marketing teams realize.

By Hina Mian, Co-Founder of Future Factors AI

At a glance:
  • 78% of brands use AI content without disclosing it
  • 32% of consumers would trust a brand less
  • 90% want transparency on AI-generated images
  • January 2026: IAB AI Transparency Framework released
TL;DR

78% of brands use AI to create content but rarely disclose it. A 2026 global study found 32% of consumers would trust a brand less if they knew, while 59% cite non-disclosure as a direct trust-breaker. The IAB released its first AI Transparency Framework in January 2026. This article gives you the data, the regulatory context, and a practical disclosure policy you can implement this week, before it becomes a crisis you have to manage reactively.

The disclosure data every marketer should see

Let me share the stat that made me pause. A study from the World Federation of Advertisers (WFA), covering brands across multiple markets in 2026, found that 78% of brands use AI to create marketing content but rarely or never disclose it to their audiences. [1]

Seventy-eight percent. That is not a fringe phenomenon. That is almost every brand with an active content operation. And the majority of those brands are making the same implicit bet: that their audience either cannot tell, does not care, or will not find out.

Here is why that bet is getting riskier by the month. A 2026 global survey of 9,869 adults across seven countries (Australia, Canada, France, Germany, Singapore, the UK, and the United States) found that 32% of consumers would trust a brand less if they found out its content was AI-generated. [2] Only 15% said it would increase their trust.

That is a net negative of 17 percentage points. And that is the penalty when AI use is disclosed upfront; as the next figure shows, the trust hit for concealing it and being found out is larger still.

The number that gets underreported: 59% of consumers in the same study cited “failure to disclose AI use” as a direct trust-breaker. Not just AI-generated content in general: specifically the failure to disclose it. Consumers are not necessarily opposed to AI in marketing. They are opposed to being misled. [2]

The uncomfortable question for every marketing team: If your audience knew exactly how much of your content involved AI, would they be surprised? If yes, you have a disclosure gap. And the way most consumers find out about disclosure gaps is not through voluntary announcements. It is through leaks, competitor campaigns, or investigative journalism.

What the IAB’s new framework actually requires

On January 15, 2026, the Interactive Advertising Bureau released what it calls the industry’s first AI Transparency and Disclosure Framework. This is not a government regulation (yet). It is an industry framework: a set of standards that IAB member companies (which include most major advertisers, agencies, and media companies) are expected to follow. [3]

The framework uses a risk-based model. Not all AI content carries the same disclosure requirement. Here is the practical breakdown of what it covers:

Mandatory disclosure categories

The framework requires clear disclosure for:

  • Synthetic humans in video or image content. If your ad features an AI-generated person who looks like a real human, that needs to be disclosed.
  • Digital twins. If you create an AI replica of a real person (a spokesperson, celebrity, or even your own employees) to produce content, disclosure is required.
  • AI chatbots that could be mistaken for humans. Customer service bots, virtual assistants, and any AI interface that could be reasonably interpreted as a human interaction requires explicit disclosure.

Recommended (but not yet required) disclosure

For AI-assisted written content, AI-edited images, and AI-suggested copy, the framework recommends disclosure without mandating it. This is where most brands’ current AI use falls. And the direction of travel is clear: recommended today, required tomorrow.

The IAB framework is not the end of the regulatory story. The EU AI Act, which came into force in 2024 and expanded enforcement scope in 2026, has its own disclosure requirements, particularly around high-risk AI uses in advertising. If you operate in EU markets, you are dealing with a regulatory environment that is stricter than the IAB’s voluntary framework.

The trust math: why hiding AI use costs more than disclosing it

The instinct many brands have is: if 32% of people trust us less when they know we use AI, let us not tell them. That math looks right until you factor in the downside risk.

Getty Images published research in 2026 finding that nearly 90% of consumers globally want to know whether an image has been created using AI. [4] SmythOS research found that 73% of consumers can already spot (and reject) AI-generated marketing content. [5] The gap between “consumers who want transparency” and “brands providing it” is not sustainable.

The trust math actually works like this: voluntary, clear disclosure costs you some trust upfront (the 32% who are skeptical). But it builds long-term credibility with the 41% who are either neutral or positive about disclosure, and it protects you from the catastrophic trust hit of being caught hiding it.
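To see the asymmetry in numbers, here is a deliberately crude back-of-envelope model in Python. The 32%, 15%, and 59% figures come from the studies cited in this article; the probability of exposure is a hypothetical placeholder, not measured data, and so is the assumption that the 59% trust-breaker share is a fair proxy for the size of the "caught hiding it" penalty.

    # Back-of-envelope trust model. Survey shares are from the studies cited
    # in this article; P_EXPOSED_IF_HIDDEN is a hypothetical assumption for
    # illustration only.
    TRUST_LESS_IF_DISCLOSED = 0.32  # survey: would trust the brand less
    TRUST_MORE_IF_DISCLOSED = 0.15  # survey: would trust the brand more
    PENALTY_IF_CAUGHT = 0.59        # survey: cite non-disclosure as a trust-breaker
    P_EXPOSED_IF_HIDDEN = 0.30      # hypothetical: chance hidden AI use surfaces

    def expected_trust_delta(disclose: bool) -> float:
        """Crude expected change in trust, in share-of-audience units."""
        if disclose:
            # Upfront hit from skeptics, partly offset by consumers who reward openness.
            return TRUST_MORE_IF_DISCLOSED - TRUST_LESS_IF_DISCLOSED
        # Hiding costs nothing until exposure; then the trust-breaker share applies.
        return -P_EXPOSED_IF_HIDDEN * PENALTY_IF_CAUGHT

    print(f"Disclose upfront: {expected_trust_delta(True):+.2f}")   # -0.17
    print(f"Hide and hope:    {expected_trust_delta(False):+.2f}")  # -0.18

Even at a modest 30% chance of exposure, the hidden path already loses, and this toy model ignores the long-term credibility that disclosure builds with the audience segment that rewards it.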

The Yahoo and Publicis Media research adds an important nuance: AI disclosure in advertising specifically (not general content) was found to increase consumer trust when the disclosure was framed positively. “This ad was created using AI to personalize your experience” performed better than no disclosure at all. [6] Context and framing matter enormously.

The brands taking the biggest risk right now are the ones built on perceived authenticity: wellness brands, lifestyle brands, personal brand businesses. For those brands, the gap between perceived and actual content production is the biggest liability. If your audience has invested emotionally in your “authentic” content, finding out it was entirely AI-generated without disclosure is not a minor story. It is a brand crisis.

What transparency actually looks like in practice

Most brand teams resist disclosure because they are imagining something they are not actually required to do: a giant disclaimer that says “WARNING: THIS WAS WRITTEN BY A ROBOT.” That is not what effective AI disclosure looks like.

Effective disclosure is clear, proportionate, and integrated. Here are the formats that work:

Content footnotes

“Created with AI assistance, reviewed and edited by our team.” One sentence. At the bottom. This covers most written content and is low-friction to implement. Most readers will see it and move on. The ones who care about it will appreciate the transparency.

Image labels

A small “AI-generated” label on images (like the format Getty Images uses on their AI content) is standard practice in 2026. Any visual content created entirely by AI gets labeled. AI-edited content from a real photo can use “AI-enhanced” if the edit is significant.

Platform-level disclosure

Instagram and LinkedIn both allow content creators to flag posts as AI-generated. Use these built-in features. They satisfy the audience without requiring you to write custom disclaimers, and they are increasingly being noticed by audiences who look for them.

Brand-level policy statement

A one-paragraph statement on your website’s “about” or “content” page explaining how your team uses AI in content creation. Something like: “We use AI tools to research, draft, and edit some of our content. All content is reviewed, edited, and approved by a member of our team before publication.” This does not need to be prominent: it needs to exist.

Building an AI content disclosure policy for your brand

Here is a practical framework for getting a disclosure policy in place quickly, without it becoming a six-month legal review process.

Step 1: Audit your current AI use

List every place AI is currently involved in your content production: writing drafts, generating images, editing video, suggesting captions, personalising emails. Be honest with yourself about the list. You cannot disclose what you have not named.
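If it helps to make the audit tangible, here is a minimal sketch of the inventory as a simple data structure. Every entry (the content types, the AI roles, the tools) is a hypothetical example; replace them with whatever your production stack actually does.

    # Hypothetical AI-use inventory -- example entries only.
    ai_use_inventory = [
        {"content": "blog posts",          "ai_role": "first drafts",        "tool": "LLM assistant"},
        {"content": "social images",       "ai_role": "full generation",     "tool": "image model"},
        {"content": "email subject lines", "ai_role": "suggested copy",      "tool": "LLM assistant"},
        {"content": "support chat",        "ai_role": "automated responses", "tool": "chatbot"},
    ]

    for item in ai_use_inventory:
        print(f"{item['content']}: AI handles {item['ai_role']} via {item['tool']}")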

Step 2: Categorise by risk level

Use the IAB framework as your guide. Synthetic humans and chatbots are high priority. Written content assistance is lower risk. Prioritise disclosures in the order of their potential to mislead your audience.
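The IAB tiers described above map naturally onto a simple lookup. A minimal sketch; the category keys here are informal shorthand, not the framework's official terminology.

    # Risk tiers per the IAB framework as summarised in this article.
    # Category keys are informal shorthand, not official IAB labels.
    DISCLOSURE_TIER = {
        "synthetic_human": "mandatory",    # AI-generated people in video or images
        "digital_twin":    "mandatory",    # AI replicas of real people
        "chatbot":         "mandatory",    # AI that could pass as human
        "written_assist":  "recommended",  # AI-assisted drafts and copy
        "image_edit":      "recommended",  # AI-edited images
    }

    def disclosure_priority(category: str) -> str:
        # Default unknown categories to "recommended" on the assumption
        # that requirements will expand rather than shrink.
        return DISCLOSURE_TIER.get(category, "recommended")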

Step 3: Write your disclosure language

For each category, write a one- or two-sentence disclosure. Test it with two or three people outside your marketing team. Does it sound defensive? Does it feel like you are hiding something? Adjust until it feels straightforward and honest, not like a legal disclaimer.

Step 4: Integrate it into your production workflow

Add the relevant disclosure to your content template so it is there by default for every piece of AI-assisted content. The goal is for disclosure to be automatic, not something that requires a decision each time.
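One way to make that automatic is to attach the footer in whatever templating step publishes your content. A minimal sketch, assuming a simple string-based pipeline; the disclosure wording reuses the examples given earlier in this article, and the function itself is hypothetical.

    # Append the appropriate disclosure by default at publish time.
    # Wording reuses examples from this article; the pipeline is hypothetical.
    DISCLOSURE_COPY = {
        "written_assist": "Created with AI assistance, reviewed and edited by our team.",
        "ai_image":       "AI-generated",
        "ai_image_edit":  "AI-enhanced",
    }

    def finalize(content: str, ai_category: str | None) -> str:
        """Attach the standard disclosure footer for AI-assisted content."""
        if ai_category is None:
            return content  # fully human-made content needs no footer
        footer = DISCLOSURE_COPY.get(
            ai_category, "Created with AI assistance, reviewed by our team."
        )
        return f"{content}\n\n{footer}"

    print(finalize("Our latest guide to spring campaigns...", "written_assist"))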

One thing to avoid: Blanket, vague language like “We may use AI in some of our processes.” That satisfies almost no one. It reads as a cover-all disclaimer rather than genuine transparency. Be specific about what you actually do.

Step 5: Publish your policy

Put a brief AI content policy statement on your website. Link to it from your content or about page. This creates a permanent reference point you can point to and signals to your audience that you have thought about this deliberately, not reactively.

What the smartest brands are doing right now

The brands handling this best in 2026 are doing a few things consistently.

They are disclosing at the point of content, not just in policy documents. A content footnote is more effective than a website policy no one reads. The disclosure reaches the person consuming the content, not just the journalist who might write about you later.

They are framing AI as a production tool, not a replacement. “Our team uses AI to move faster and spend more time on strategy” lands better than “this was AI-generated.” The framing shifts from “we let AI do our job” to “we are using technology to do our job better.”

They are being specific, not general. “We used AI to generate the initial draft of this article, which was then edited and fact-checked by [Author Name]” is more trustworthy than “AI-assisted.” Specificity signals that you have actually thought about where AI is and is not appropriate, rather than slapping a disclaimer on everything.

They are building audience literacy alongside disclosure. Brands that actively talk about how AI fits into their creative process, what it does well, and what humans still control are building audiences who understand and trust the approach. This is a longer play, but it is more durable than disclosure-as-legal-cover.

How to use AI and still sound human

Transparency is one half of the equation. The other half is ensuring your AI-assisted content still sounds like you, not like a language model having a professional day.

The non-negotiable edit principle: never publish AI output without substantive human editing. Not spell-checking. Not adjusting one word. Substantive editing that adds your specific point of view, your specific examples, your actual opinions. If you cannot tell the difference between the AI version and the final version, you have not edited enough.

Three things AI cannot do that you can: take a position based on direct experience, reference a specific client situation or outcome (even anonymized), and say something that might be slightly wrong but is genuinely what you believe. All three of those make content feel human. Use them. You can also read our guide to maintaining your brand voice at scale with AI for a detailed framework on keeping AI-assisted content sounding genuinely like your brand.

The goal is not to hide that AI was involved. The goal is to ensure that what reaches your audience is genuinely shaped by human judgment, even when AI did the scaffolding. Those are very different things. One is deceptive. The other is using the tools available to produce better work, which is what every generation of professionals has always done.

Frequently asked questions

Are brands legally required to disclose AI-generated content?
In most markets, there is no blanket legal requirement yet for general marketing content. However, the IAB’s AI Transparency and Disclosure Framework (January 2026) requires disclosure for synthetic humans, digital twins, and AI chatbots that could mislead consumers. Regulatory requirements are expanding, and the safest assumption is that they will grow, not shrink.
Does disclosing AI content hurt brand trust?
Research shows 32% of consumers would trust a brand less if they knew content was AI-generated, while 15% would trust more. However, failing to disclose and being caught is significantly more damaging: 59% of consumers cite non-disclosure as a trust-breaker. The risk of hiding AI use is higher than the risk of disclosing it thoughtfully.
What should an AI content disclosure actually say?
Simple and specific performs best. Something like “This content was created with AI assistance and reviewed by our team” covers most use cases. For AI-generated images, “Created with AI” is standard. Avoid legalistic language that feels evasive. The goal is to inform, not to disclaim liability.
What types of content need AI disclosure?
The IAB framework specifically requires disclosure for synthetic humans in video or images, AI chatbots that could be mistaken for humans, and digital twins. For other AI-assisted content including written drafts and image editing, voluntary disclosure is becoming best practice even where not yet legally required.
How do I keep brand content feeling human when using AI?
Use AI for structure, research, and first drafts, then edit heavily with your own voice, specific examples, and genuine opinions. The more AI handles the scaffolding and the more you handle the judgment and perspective, the more human the result feels. Never publish AI output without a substantive human edit.
About This Article

Why we cover AI transparency for marketers

Most AI marketing content focuses on how to use AI tools, not on the downstream trust implications. This article addresses the gap between widespread AI adoption and the disclosure practices that protect long-term brand relationships. Written for marketing directors, brand managers, and content leads who are already using AI and need to think through what that means for their audience relationships.

Sources

  [1] Mission Media Asia. AI Content Disclosure Gap: 78% Use, Few Disclose. 2026.
  [2] Daily Guardian. Consumers demand AI transparency from brands, global study finds. 2026.
  [3] IAB. AI Transparency and Disclosure Framework. January 2026.
  [4] Getty Images. Nearly 90% of Consumers Want Transparency on AI Images. 2026.
  [5] SmythOS. The AI Content Trust Gap: Why 73% of Consumers Can Spot and Reject AI-Generated Marketing. 2026.
  [6] Yahoo & Publicis Media. AI Ad Disclosure Increases Consumer Trust. 2026.
Hina Mian — Co-Founder, Future Factors AI

Hina brings 10+ years of marketing strategy and brand growth experience to the AI conversation. She helps businesses and teams cut through the noise and apply AI where it actually matters. Future Factors offers AI Bootcamps, Corporate Workshops, and Speaking & Consulting for organisations ready to move from AI-curious to AI-confident.

More about Hina →
