IAB’s AI Framework: Making Transparency Part of Creative

Publisher Collective
Elysée
February 13, 2026
5 min read
About the Author

Elysée Yhuel

Elysée is Publisher Collective's Marketing Executive. She keeps our team up to date by researching and writing about the latest AdTech trends and creates our publisher newsletter. With a background in academia, Ely is passionate about making complex industry topics clear and engaging for readers.


If you’ve spent any amount of time online recently, you’ve probably seen ads that practically scream “Made by AI.” Whether it’s oddly generated visuals or copy that feels generic and hollow, AI‑generated advertising is increasingly prevalent.

And it’s no wonder why. With tightening advertiser budgets, AI represents a cost-efficient and time-saving solution. But while 89% of advertisers who use generative AI disclose it at least occasionally, fewer than half do so consistently.

That’s a big reason why the Interactive Advertising Bureau’s (IAB) new AI Transparency and Disclosure Framework is so important. These guidelines aren’t about slowing down innovation; rather, they’re about giving structure to how AI is used in advertising and pushing the industry toward greater transparency and trust. For advertisers, this shift represents a strategic pivot with implications for creativity, credibility, and consumer perception.

In this post, we’re breaking down the guidelines, why they matter to advertisers, and the implications they have for the ad industry.

Breakdown of the IAB’s AI framework

The framework uses a practical, risk-based approach. Rather than labeling everything, it recommends disclosing AI only when it meaningfully impacts the content’s authenticity, identity, or how it represents people or ideas in ways that could mislead consumers.

Some examples include:

  • AI-generated images or videos created from prompts (text-to-image or image-to-image), where human involvement is limited to refining, editing, or compositing content to portray real-world events.
  • AI-generated voices of deceased individuals creating statements they never actually made, even if authorized by their estate.
  • AI-generated voices of living individuals producing statements about specific events, actions, commitments, or situations that didn’t happen, separate from scripted commercial endorsements or brand messaging.
  • Digital “twins”, or replicas of deceased individuals in any form, even with estate authorization.
  • Synthetic avatars, AI chatbots, or conversational agents in advertising that mimic human interaction.

To put this into practice, the Framework pairs two forms of disclosure:

  • Clear consumer-facing labels or icons for AI content, placed thoughtfully next to the creative.
  • Behind-the-scenes machine-readable metadata to ensure technical compliance (a rough sketch of what this could look like follows below).
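
To make that second component concrete, here is a minimal, purely illustrative sketch of what a machine-readable disclosure payload could look like, written in TypeScript. The field names (aiGenerated, consumerLabel, and so on) are our own assumptions for illustration; they are not the IAB’s published metadata specification.

```typescript
// Illustrative only: a hypothetical shape for machine-readable AI disclosure
// metadata. Field names are assumptions, not the IAB's actual schema.
interface AIDisclosureMetadata {
  aiGenerated: boolean;                  // was generative AI materially involved?
  involvement: "assisted" | "generated"; // degree of AI involvement
  assetTypes: Array<"image" | "video" | "voice" | "avatar" | "chatbot">;
  consumerLabel?: string;                // human-readable label shown next to the creative
  disclosedAt?: string;                  // ISO 8601 timestamp for when disclosure was attached
}

// Example: a creative that pairs an on-ad label with machine-readable metadata.
const exampleDisclosure: AIDisclosureMetadata = {
  aiGenerated: true,
  involvement: "generated",
  assetTypes: ["image"],
  consumerLabel: "This ad contains AI-generated imagery",
  disclosedAt: new Date().toISOString(),
};

console.log(JSON.stringify(exampleDisclosure, null, 2));
```

The point of the pairing is that the same fact is expressed twice: once for people (the on-ad label) and once for systems (the metadata), so both audiences and ad tech pipelines can verify how AI was used.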

Why trust matters as AI fatigue rises

So what’s the catalyst behind this framework? Turns out, contrary to what many advertisers believe, most consumers look upon AI ads unfavorably.

According to research by the IAB and Sonata Insights, advertisers tend to focus more on how AI affects human creativity, implementation costs, and brand authenticity than on how consumers perceive it.

On the flip side, Millennial and Gen Z consumers are extremely skeptical of businesses using AI in their advertising. Gen Z, in particular, is almost twice as likely as Millennials to view AI ads negatively (39% vs. 20%), with many describing brands that use AI as inauthentic, disconnected, or unethical.

Most Millennial and Gen Z consumers claim that clear disclosure would increase (or not impact) their likelihood to buy a product or service.

This trend reflects a broader shift in consumer expectations. Audiences are calling for authenticity in an open web flooded with AI-generated content. Creators are increasingly scaling back AI use or clearly showing where human creativity plays a role. Brands, too, are embracing content that feels real, complete with imperfections, and some even require influencers to avoid AI or disclose its use.

In this environment, trust is no longer optional. Brands that prioritize transparency can cut through AI fatigue, strengthen connections with their audience, and differentiate themselves in a crowded marketplace.

Key takeaways

Here’s what this means for you and your advertising strategy: the takeaways below show how to use AI responsibly while building trust, protecting your brand, and connecting with audiences.

Understand where AI assists versus fully generates content.

Knowing the degree of AI involvement, whether it’s helping a human refine ideas or fully creating content, is critical for deciding when and how to disclose its use.

This clarity can also inform your broader advertising strategy. Some brands may choose to fully embrace AI in certain campaigns, while others may decide to limit or avoid AI use in advertising entirely, prioritizing authenticity or creative control.

  • Be intentional about AI’s role to ensure transparency, maintain trust, and align your creative approach with audience expectations.

Disclosure builds consumer trust.

Clear, transparent disclosure of AI-generated content, whether it be images, videos, voices, avatars, or chatbots, can go a long way in addressing AI fatigue.

  • Label AI usage. Doing so signals honesty and respect for the audience, which can improve perceptions of authenticity, credibility, and integrity for a brand. Over time, this can translate into stronger, longer-lasting relationships with consumers who feel informed rather than misled.

Consumer perception matters.

While advertisers often focus on internal concerns, like costs, efficiency, or creative workflow, consumer sentiment is a critical factor that can’t be ignored. We already know that younger audiences are highly sensitive to AI in advertising, which makes clear disclosure essential.

  • Clear disclosure mitigates negative reactions and can also influence purchasing decisions and brand loyalty.
  • How consumers perceive your AI usage can be just as important as the creative content itself.

A risk-based approach makes disclosure practical.

The IAB’s Framework avoids blanket labeling, encouraging disclosure only when AI materially affects authenticity, identity, or representation in ways that could mislead consumers.

  • Be compliant without overwhelming creative teams.
  • Focus on disclosure where it matters most, as sketched below. This way, brands can be both responsible and efficient.
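
As a rough illustration of that risk-based logic, the sketch below (again in TypeScript) flags disclosure only when AI materially affects authenticity, identity, or representation. The criteria and field names here are assumptions chosen for illustration, not rules defined by the Framework.

```typescript
// Illustrative sketch of a risk-based disclosure check. The criteria mirror the
// Framework's themes (authenticity, identity, representation), but the exact
// fields and logic are assumptions for illustration, not IAB-defined rules.
interface CreativeAIUsage {
  depictsRealPeopleOrEvents: boolean;  // could the content be mistaken for reality?
  replicatesIdentity: boolean;         // synthetic voice, likeness, or digital "twin"
  mimicsHumanInteraction: boolean;     // avatars, chatbots, conversational agents
  aiLimitedToMinorAssistance: boolean; // AI only polished work a human created
}

function needsDisclosure(usage: CreativeAIUsage): boolean {
  // Low-risk case: AI merely assisted a human-led creative process.
  if (usage.aiLimitedToMinorAssistance) return false;
  // Disclose when AI could mislead about authenticity, identity, or representation.
  return (
    usage.depictsRealPeopleOrEvents ||
    usage.replicatesIdentity ||
    usage.mimicsHumanInteraction
  );
}

// Example: a fully AI-generated spokesperson avatar would warrant disclosure.
console.log(
  needsDisclosure({
    depictsRealPeopleOrEvents: false,
    replicatesIdentity: false,
    mimicsHumanInteraction: true,
    aiLimitedToMinorAssistance: false,
  })
); // -> true
```

In practice, a checklist like this would sit alongside human judgment rather than replace it.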

Transparency can help performance.

Thoughtful disclosure doesn’t just protect your brand—it can actively strengthen your advertising performance. When audiences understand which parts of ads are AI-assisted or fully AI-generated, they’re more likely to engage with the brand.

  • Transparency differentiates your messaging in an already crowded advertising landscape, improving trust and even driving higher conversions.
  • Treat disclosure as a strategic creative advantage, helping your brand connect more deeply with your audience while staying aligned with industry standards.

How We Can Help

At Publisher Collective, we help advertisers like you navigate these industry changes with strategies that prioritize both creativity and impact. We can create high-quality, branded content and leverage high-impact ad formats, so that your campaigns feel authentic and resonate with audiences. Plus, we connect you with niche, high-intent audiences, going beyond broad reach to deliver engagement where it matters most, helping your campaigns drive meaningful results.

Interested? Feel free to reach out to our team.

Book a call with an expert
