
Dynamic Creative Optimization Is Dead. Here’s What Replaced It.

Spencer Merrill

Dynamic creative optimization sounded like the end of creative guesswork. Feed it your headlines, images, CTAs, and audiences, and it would automatically assemble the best combinations, serve them, learn from the data, and optimize in real time. No more manual A/B tests. No more guessing. Just algorithmic efficiency.

The reality: DCO is a machine that optimizes spending on creatives after you’ve already paid to find out which ones lose.

How DCO Actually Works (And Why That’s a Problem)

DCO operates on a feedback loop. It serves variants, collects impression and click data, assigns weights to better-performing combinations, and gradually shifts budget toward winners. Sounds smart. But consider what has to happen before it can learn anything (a toy simulation of the loop follows this list):

  • Minimum impressions per variant. Statistical confidence requires volume. Before DCO can confidently declare a winner, every variant needs enough exposure to produce reliable signal. For smaller accounts, that alone can exhaust test budgets before the algorithm converges.
  • The learning phase tax. Meta, Google, and TikTok all impose a learning phase, typically 7–14 days and 50+ conversion events, before their algorithms stabilize. During that window, you’re paying above-market CPMs for sub-optimal delivery.
  • Losers run on your dime. The entire premise of in-flight optimization is that some creatives will underperform while the system figures out which ones those are. Every impression served to a losing creative is budget that can’t be recovered.
  • Platform lock-in. DCO tooling is built inside each platform’s ad manager. Insights from Meta’s DCO don’t transfer to TikTok. You run separate tests on each platform, pay the learning phase cost each time, and end up with fragmented creative intelligence that lives inside walled gardens.
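
To see where the money goes, here's a toy simulation of that loop in Python. The CTRs, the CPM, and the proportional re-weighting rule are all made-up assumptions, not any platform's actual algorithm; the point is structural: under in-flight optimization, losers must be served before they can be demoted.

```python
import random

# Toy DCO loop: serve variants by weight, observe clicks, re-weight.
# All numbers below are illustrative assumptions.
TRUE_CTR = {"A": 0.012, "B": 0.009, "C": 0.006, "D": 0.004}  # unknown to the algorithm
CPM = 12.00                            # assumed cost per 1,000 impressions
weights = {v: 1.0 for v in TRUE_CTR}   # delivery starts uniform

impressions = {v: 0 for v in TRUE_CTR}
clicks = {v: 0 for v in TRUE_CTR}
spend = {v: 0.0 for v in TRUE_CTR}

for step in range(1, 100_001):         # one impression per step
    # Sample a variant in proportion to its current weight.
    total = sum(weights.values())
    r, cum = random.random() * total, 0.0
    for v, w in weights.items():
        cum += w
        if r <= cum:
            break
    impressions[v] += 1
    spend[v] += CPM / 1000
    if random.random() < TRUE_CTR[v]:
        clicks[v] += 1
    # Every 5,000 impressions, shift weight toward observed CTR --
    # this re-weighting step is the "optimization" in DCO.
    if step % 5_000 == 0:
        weights = {v: max(clicks[v] / max(impressions[v], 1), 1e-4)
                   for v in TRUE_CTR}

best = max(TRUE_CTR, key=TRUE_CTR.get)
wasted = sum(s for v, s in spend.items() if v != best)
print(f"Spent ${wasted:.2f} of ${sum(spend.values()):.2f} on eventual losers")
```

Even with the re-weighting working exactly as intended, well over half of the simulated budget typically lands on variants the loop eventually demotes. In this toy version the proportional rule never fully concentrates on the winner; real systems shift harder, but only after the same exploratory spend.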

DCO was engineered to solve a platform problem (efficiently delivering personalized ad experiences at scale), not a creative problem (knowing which creative will win). Those are different jobs. Conflating them is how brands end up paying for learning they could have done before they spent a dollar.

DCO vs. AI Pre-Launch Testing: Head-to-Head

| Dimension | DCO | AI Pre-Launch Testing |
| --- | --- | --- |
| When it runs | After launch, in-flight | Before launch, pre-spend |
| Budget required to start | Yes, learning phase required | No media budget required |
| Time to first insight | 7–14 days (learning phase) | Minutes |
| Platform dependency | Platform-specific, siloed | Platform-agnostic |
| Losers run on your budget | Yes, always | No, eliminated before launch |
| Creative insight ownership | Stays inside platform | You own the data |
| Minimum scale to be useful | High (enterprise/scale accounts) | Any account size |
| Explains why a creative wins | No, black-box optimization | Yes, qualitative explanations |
| Works across platforms | One platform at a time | Test once, inform all platforms |

The Decline Is Real

Search interest in “dynamic creative optimization” has been flat or declining since 2022, and the practitioner community has grown skeptical. Post-iOS 14, the signal quality DCO depends on (granular conversion events flowing back to the platform) degraded substantially. DCO needs data to learn. Less data means slower learning, longer learning phases, and higher costs to reach statistical confidence.

Meanwhile, the pace of creative production accelerated. AI tools now let teams produce dozens of variants in hours. The bottleneck isn’t generating creatives anymore: it’s knowing which ones to bet on before you run them.

That’s a fundamentally different problem than DCO was designed to solve.

What Replaced It: Predict Before You Spend

The new model flips the sequence. Instead of launching and optimizing, you predict and then launch. The intelligence moves upstream, before media spend, before learning phases, before losers can drain your budget.

This is what AI pre-launch testing does. You upload your creative variants, define your target audience, and get a ranked prediction of which creatives will perform, before a single dollar goes to media.

The mechanics are different from DCO at every level (a hypothetical client-side sketch follows this list):

  • No minimum spend requirement. Prediction happens offline. You don’t need impressions to get signal.
  • No learning phase. There’s no algorithm to train. The model evaluates your creatives against a synthetic representation of your target audience immediately.
  • Platform-agnostic. Pre-launch testing doesn’t care what platform you’re running on. You test once and know which creative to push everywhere.
  • Losers never run. You eliminate underperformers before launch. The only creatives that hit the platform are the ones predicted to win.
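
To make that concrete, here's a minimal sketch of the pre-launch flow in Python. Everything in it is a hypothetical placeholder, including the `Creative` type and the `predict_score` stub, rather than any vendor's real API; the structure is the point: score every variant offline, then send only the top-ranked ones to the platform.

```python
from dataclasses import dataclass

# Hypothetical types and scoring -- not any vendor's real API.

@dataclass
class Creative:
    variant_id: str
    headline: str
    cta: str

def predict_score(creative: Creative, audience: dict) -> float:
    """Placeholder for a model-based prediction. A real system would
    evaluate the creative against a synthetic audience; this stub just
    returns a dummy score so the example runs end to end."""
    return float(len(creative.headline) % 7)  # dummy signal, not a real model

variants = [
    Creative("v1", "Save 20% today", "Shop now"),
    Creative("v2", "Join 50,000 happy customers", "Learn more"),
    Creative("v3", "Tired of slow mornings?", "Try it free"),
]
audience = {"age_range": (25, 44), "interests": ["coffee"], "geo": "US"}

ranked = sorted(variants, key=lambda c: predict_score(c, audience), reverse=True)
launch_list = ranked[:2]   # only predicted winners ever reach the platform
print([c.variant_id for c in launch_list])
```

Swap the stub for a real prediction call and the shape stays the same: the ranking happens before any impression is bought.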

The ad testing platform model treats creative evaluation as a data science problem to be solved upstream, not a media buying problem to be solved in-flight.

The Compounding Advantage

Pre-launch testing doesn’t just save the immediate budget you would have spent on learning phases. It compresses the creative iteration cycle. When you can evaluate a new batch of creatives in hours instead of weeks, you run more tests. More tests means faster learning. Faster learning means your creative strategy improves at a pace that in-flight optimization, with its 14-day learning windows, simply cannot match.

Teams using pre-launch testing aren't waiting 14 days per test cycle, and the economics compound: each iteration starts from a higher baseline because the obvious losers were eliminated in the previous round.
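
As a back-of-the-envelope comparison, assume a 90-day quarter, the 14-day learning window cited above, and, purely as an assumption, a two-day pre-launch turnaround (prediction plus production):

```python
# Test cycles per quarter under each model. The 2-day figure is an
# assumed pre-launch turnaround, not a quoted spec.
QUARTER_DAYS = 90
in_flight = QUARTER_DAYS // 14    # 6 cycles per quarter
pre_launch = QUARTER_DAYS // 2    # 45 cycles per quarter
print(f"{in_flight} vs {pre_launch} test cycles per quarter")
```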

When DCO Still Makes Sense

DCO is not a dead technology. For specific use cases, it’s genuinely the right tool:

  • Real-time personalization at scale. If you need to assemble ads dynamically from live data like product availability, pricing, and geo-specific offers, DCO’s runtime assembly is necessary. Pre-launch testing can’t predict the performance of an ad that doesn’t exist until the moment of serving.
  • Large creative libraries. If you have 200+ creative components and need to find high-performing combinations (say, 10 headlines × 8 images × 5 CTAs yields 400 possible ads), DCO's combinatorial exploration is valuable. Pre-launch testing is better suited to evaluating a bounded set of variants, say 5–20.
  • Enterprise accounts with data signal. DCO works best when the platform has rich signal to learn from. High-volume enterprise accounts with strong conversion data can reach statistical confidence faster, making the learning phase cost more palatable.

The honest summary: DCO is infrastructure for scale. It requires volume to justify the learning phase cost and enough conversion data to optimize against. For the vast majority of advertisers—DTC brands, performance marketing agencies, SMBs running $5K–$100K/month in ad spend—DCO is the wrong tool for the problem they actually have.

The Problem DCO Can’t Solve

The creative decision that matters most happens before launch: which direction to bet on. Should this campaign lead with price, with social proof, with a problem-solution frame, or with lifestyle imagery? DCO doesn’t answer that question. It optimizes within whatever creative library you give it. Garbage in, garbage out. Just optimized garbage.

Pre-launch testing answers the strategic question. It tells you which creative direction resonates with your target audience before you invest in production and media. That insight shapes the creative library you give to DCO (if you use it) and the creative strategy you take into any ad platform.

The Workflow Shift

The transition from DCO to pre-launch testing isn’t just a tool swap. It’s a workflow change. The creative team moves from “test and learn” to “predict, confirm, scale.” The short version: generate variants, test before any spend hits the platform, launch only the predicted winners. We wrote the full step-by-step in the canonical generate–test–launch workflow—worth a read if you’re setting this up for the first time.
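
A minimal sketch of that loop, with hypothetical stand-in functions (none of this is a real SDK):

```python
import random

# Hypothetical stubs for the "predict, confirm, scale" workflow --
# names and logic are illustrative only.
def generate_variants(brief: str, n: int) -> list[str]:
    return [f"{brief} -- variant {i}" for i in range(1, n + 1)]

def predict_winners(variants: list[str], keep: int = 3) -> list[str]:
    # Stand-in for offline prediction: no spend, no learning phase.
    return sorted(variants, key=lambda _: random.random())[:keep]

def confirm(shortlist: list[str], daily_cap: float = 50.0) -> list[str]:
    # Stand-in for a small, budget-capped live check of predicted winners.
    print(f"confirming {len(shortlist)} variants at ${daily_cap}/day")
    return shortlist[:1]

def scale(winners: list[str], platforms: list[str]) -> None:
    for platform in platforms:
        print(f"scaling {winners} on {platform}")

shortlist = predict_winners(generate_variants("Spring sale", 12))
scale(confirm(shortlist), ["meta", "tiktok", "google"])
```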

Step 2 is where A/B testing falls short, and it's the job DCO was never designed to do. The in-flight optimization DCO provides is a workaround for not knowing the answer upfront. Pre-launch testing gives you the answer upfront.

For a deeper look at what the AI approach actually involves, read the AI ad creative guide or see how Kettio does it.

The Combined Approach

Pre-launch testing and DCO are not mutually exclusive. For accounts that use both: pre-launch testing informs what goes into your DCO creative library. Instead of feeding DCO 50 untested variants and letting it sort out the losers, you feed it 15 pre-validated variants. DCO’s learning phase runs faster (smaller, better creative set), costs less (fewer losers to surface), and produces a better winner (starting from a higher baseline).
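
The arithmetic is straightforward. Assuming, for illustration only, an 8,000-impression floor to get a readable signal on each variant and a $12 CPM:

```python
# Cost just to get a readable signal on every variant in the library.
# Both constants are assumptions for illustration.
MIN_IMPRESSIONS_PER_VARIANT = 8_000   # assumed evaluation floor
CPM = 12.00                            # assumed cost per 1,000 impressions

def exploration_cost(n_variants: int) -> float:
    return n_variants * MIN_IMPRESSIONS_PER_VARIANT * CPM / 1000

print(f"50 untested variants:      ${exploration_cost(50):,.0f}")   # $4,800
print(f"15 pre-validated variants: ${exploration_cost(15):,.0f}")   # $1,440
```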

The question is where you put the intelligence. Putting it upstream, before spend, is almost always the right answer.

Here’s the question worth sitting with: what are you actually paying for when a losing creative runs for 10 days while the algorithm figures out it’s a loser? Not data. Not learning. Budget. The only way to avoid paying for that is to move the evaluation upstream.

DCO optimized the wrong thing. Stop paying to discover losers in-flight. Predict your winners before you launch.
