The three dimensions of media measurement planning: Types A, B, and C
There’s No One-Size-Fits-All Measurement Plan
And That’s a Feature, Not a Bug.
The right measurement plan depends on three moving parts:
YOUR OWN Business/Client/Team Context: budget, data maturity, risk tolerance, financial goals, among others.
YOUR OWN Scope/Ownership/Contracts: media channels, creative levers, timelines, among others.
YOUR OWN audience’s behavior-shift goal: here your audience might be the business’s consumers (awareness, intent, sales), the business itself (cost, time savings), or simply internal stakeholders (moving from equity metrics to sales metrics).
In this quick post I walk through the three broad measurement plan types I rely on, when each of them makes sense, and their potential pitfalls, all alongside examples from my work.
Type A: High-Level, Easy-to-Understand, but IMPACTFUL
Key Concept: the quality of an intervention (for creative, an impression or view) should be measured by the relative quality of the execution, NOT ONLY by its cost. Why? Because the cost of an execution depends more on the platform it runs on than on how well it performs.
Good for: Quick alignment on what matters, especially when a team is stuck on misaligned performance goals such as “cheap impressions with no lift” versus “impactful but expensive impressions.”
How it works:
Start with TWO KPIs, NOT TEN. I like Effectiveness (Δ in desired outcome) on the Y-axis and Efficiency ($ per outcome) on the X-axis.
Plot channels/creatives (or any other comparable breakdown) as bubbles. Anything in the upper-left quadrant is your hero (high impact, low cost).
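To make the chart concrete, here is a minimal plotting sketch in Python (pandas + matplotlib). The channel names, lift figures, and spend are hypothetical placeholders, not results from any real campaign.

```python
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical channel-level data: "lift" is the change in the desired outcome
# (effectiveness), "cost_per_outcome" is efficiency, "spend" scales bubble size.
channels = pd.DataFrame({
    "channel": ["Online Video", "Paid Social", "Display", "Audio"],
    "lift": [4.2, 2.8, 1.1, 1.9],            # delta in desired outcome (points)
    "cost_per_outcome": [18, 9, 4, 7],        # dollars per incremental outcome
    "spend": [1_200_000, 800_000, 500_000, 300_000],
})

fig, ax = plt.subplots()
ax.scatter(channels["cost_per_outcome"], channels["lift"],
           s=channels["spend"] / 2_000, alpha=0.6)
for _, row in channels.iterrows():
    ax.annotate(row["channel"], (row["cost_per_outcome"], row["lift"]))

ax.set_xlabel("Efficiency ($ per outcome)")
ax.set_ylabel("Effectiveness (Δ in desired outcome)")
ax.set_title("Upper-left quadrant = high impact at low cost")
plt.show()
```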
Why it resonates:
Jargon-free: execs get it
Actionable: Helps shift the metric objective from efficiency-only to a portfolio view of efficiency and effectiveness
Quick to launch: it only requires two KPIs to get started
Watch-outs
Treat it as directional if sample sizes are small
What counts as “high impact” can be noisy over time
Example: The DART Decision Framework showed the Verizon VB team how their media KPIs map to creative effectiveness and efficiency. After conceptualizing the issue, the team quickly shifted to a more balanced view of their creative performance benchmarks.
Type B: Traditional Funnel-Based Planning
Key Concept: don’t sacrifice clarity and alignment. Having those two is already a win. Instead, establish a LEARNING AGENDA and prove that changing something is worth it.
Good for: when you need everyone (agency, client, finance) to align on the basics.
How it works:
Map funnel stages (Awareness → Consideration → Action → Acquisition → Loyalty → Sales).
For each stage, pick one primary and one secondary metric and only use a third metric if it really acts as a leading indicator.
Schedule precise cadences, not necessarily extra-frequent ones (weekly readouts for upper funnel, daily or real-time for lower funnel). A sketch of the resulting plan follows below.
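As a sketch of what the output of this exercise can look like, the stage-to-metric mapping can live in a plain data structure that everyone signs off on. The metric picks and cadences below are illustrative placeholders, not a recommendation for any specific brand.

```python
# Illustrative funnel-based measurement plan: one primary and one secondary
# metric per stage, plus a readout cadence. All picks here are placeholders.
measurement_plan = {
    "Awareness":     {"primary": "aided awareness lift", "secondary": "reach",                 "cadence": "weekly"},
    "Consideration": {"primary": "brand search lift",    "secondary": "site visits",           "cadence": "weekly"},
    "Action":        {"primary": "add-to-cart rate",     "secondary": "conversion rate",       "cadence": "daily"},
    "Acquisition":   {"primary": "new customers",        "secondary": "cost per new customer", "cadence": "daily"},
    "Loyalty":       {"primary": "repeat purchase rate", "secondary": "churn rate",            "cadence": "weekly"},
    "Sales":         {"primary": "incremental revenue",  "secondary": "ROAS",                  "cadence": "daily"},
}

# Print the plan as a one-line-per-stage readout for alignment meetings.
for stage, spec in measurement_plan.items():
    print(f"{stage}: {spec['primary']} (primary), "
          f"{spec['secondary']} (secondary), {spec['cadence']} readout")
```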
Why it resonates:
Everyone already knows the funnel. The debate shifts to timing & value-adds (e.g., adding brand search lift to Consideration).
Resource planning becomes clearer: if Loyalty metrics require CDP data, mar-tech can scope the integration early.
Watch-outs
Funnels oversimplify messy consumer journeys, which lowers the chance of finding actual insights. Always pair the funnel with post-campaign MMM or attribution to validate results, or lean on an analytics/data team.
Example: I clearly explained to the Marcus by Goldman Sachs team what measurement planning entails, what it includes, and what to expect. Debate was minimal, because that is often all you need once a funnel is established.
Type C: Granular, Highly Dependent Planning
Good for: standing up new capabilities, addressing a key executional issue, among others
How it works:
Diagnosis first: confirm the business needs that justify the complexity (e.g., real-time creative swaps to cut CPA by 20%).
Blueprint the stack – ad server, DSP, DCO vendor, data warehouse, BI layer.
Plan experiments – holdouts, geo-splits, or ghost ads to isolate lift amid the rapid creative churn.
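As a toy illustration of the experiment step, here is a minimal geo-split holdout readout in Python. The geos, group assignments, and conversion rates are invented for the example, and a real readout would add a significance test (or a geo-experiment package) rather than comparing raw means.

```python
import pandas as pd

# Hypothetical geo-split readout: a few test geos get the DCO treatment,
# control geos are held out, and we compare conversions per 1k exposed users.
geo_results = pd.DataFrame({
    "geo":   ["NYC", "CHI", "LAX", "DAL", "SEA", "ATL"],
    "group": ["test", "test", "test", "control", "control", "control"],
    "conversions_per_1k": [5.1, 4.7, 5.4, 3.9, 4.1, 3.8],
})

test_mean = geo_results.loc[geo_results["group"] == "test", "conversions_per_1k"].mean()
control_mean = geo_results.loc[geo_results["group"] == "control", "conversions_per_1k"].mean()

relative_lift = (test_mean - control_mean) / control_mean
print(f"Test: {test_mean:.2f} | Control: {control_mean:.2f} | Lift: {relative_lift:.1%}")
```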
Why it’s worth the effort
Personalization at scale – creative variants auto-optimise toward micro-audience signals.
Embedded testing – every impression effectively becomes an A/B cell, accelerating learning cycles.
Watch-out
If stakeholder patience or data quality is low, DCO can become an expensive science project. Start with a pilot line of products or audiences.