Blog

Achim’s Razor

Positioning, Messaging, and Branding for B2B tech companies. Keep it simple. Keep it real.

Insight

Causal CMO #1: Attribution still dominates GTM. That’s a problem.

GTM teams still rely on attribution. That’s a problem. Learn how causality models reduce risk, reveal drivers, and rebuild CMO credibility.
June 10, 2025
|
5 min read

Many GTM teams continue to rely on correlation to justify decisions. It’s an ongoing problem. In this kickoff to our Causal CMO Series, Mark Stouse and I cover why marketing attribution continues to fail, how internal data keeps getting “engineered”, and how new legal rulings will put GTM leaders directly in the line of fire.

Takeaways

  • Correlation can’t explain real outcomes. Causality can.
  • Internal GTM data is often manipulated under pressure. Not malicious, just human.
  • Delaware rulings make all officers, including CMOs, accountable for data quality.
  • Attribution logic like first, last, and multi-touch is correlation, not cause.
  • Causal models start from business outcomes and force teams to reverse-engineer what actually moved the needle.

Welcome to the Causal CMO

Mark Stouse and I kicked off the first session with a direct conversation about how marketing attribution models still hold GTM teams back from reporting real-world buyer behavior. Too many are still stuck in correlation, and it’s costing them. 

Correlation feels safe. It’s anything but.

Marketers look for patterns. It’s what we've been trained to do. But in complex, time-lagged buying cycles like those in B2B tech, correlation can’t tell you what actually caused the deal to close (or not).

Tom Fishburne's cartoon showing a business meeting where a team misinterprets correlation between sales and shaved heads, humorously illustrating flawed logic in marketing data analysis.
Source: Marketoonist, by Tom Fishburne

Mark made a great point: correlation is binary. It either exists or it doesn’t. That makes it easy for humans to understand, which is why we gravitate toward it. But it’s not how markets work. It’s not how buyers behave. It’s not what happens in the real world. 

Causality tells you what actually led to the outcome. It’s multidimensional. It accounts for context, time lag, headwinds and tailwinds, and everything else correlation ignores. That’s why it’s harder to fake and why it’s so valuable.

Many GTM teams still lean on correlation because it’s faster and easier to defend, especially under pressure. As Mark pointed out, correlation-based patterns are easy to manipulate. If the data doesn’t match the story you need to tell, you can tweak the inputs until it does. That’s why attribution charts and dashboards persist: they give the illusion of precision without exposing the actual drivers. It looks clean. It feels controllable. But it’s a shortcut that hides the real story.

An intuitive decision is either really right or really wrong. No leadership team can afford that kind of spread.

Your data is lying to you.

Mark shared a case where a fraud detection tool scoured over 14 years of CRM records. More than two-thirds of the data was found to be “engineered.”

It wasn’t one bad actor. It was many people over many years, under tremendous pressure to justify their seat at the table, slowly reshaping the story to fit what they needed to show.

People use data to defend themselves, not to learn.

This kind of manipulation is hard to catch with correlative tools. Causal systems, on the other hand, make it obvious when something doesn’t add up. It’s unavoidable. 

Delaware changed the rules. CMOs are now on the hook.

Unlike Sarbanes-Oxley, which took years to pass and gave leadership time to prepare, the Delaware rulings came quickly and without warning. Mark called it one of the biggest blind spots in recent corporate memory. 

The 2023 McDonald’s case expanded fiduciary oversight beyond the boardroom. Now every officer (CMOs included) is legally accountable for the quality and integrity of business data.

The writing’s on the wall. If you’re not governing your data, you’re exposed.

Data quality is now the number one trigger for shareholder lawsuits. CRM systems are full of data manipulation and missing governance. Lawyers know it’s an easy audit. If your GTM data can’t hold up under causal scrutiny, you’re exposed.

Attribution isn’t just flawed. It’s obsolete.

First-touch. Last-touch. MTA. Even Bayesian models. They’re all correlative. They’re all easy to manipulate. And they all fall apart under scrutiny.

Mark told the story of a meeting where a CIO changed the MTA weightings mid-call, then had someone else change them again. Marketing freaked out, but had no causal rationale to defend their weightings. The numbers were all made up.

If your model changes when you tweak the sliders, it’s not a model. It’s a toy.

Jon Miller, founder of Marketo and Engagio, recently said it himself: attribution is BS.

Respect to Jon for saying it out loud. It’s a bit like the first step in a 12-step program: admitting you have a problem. And that’s where every CMO still holding onto attribution logic has to start. 

Mark followed up with his own take on why causality is quickly becoming the standard.

Both are worth reading. 

Marketing is probabilistic, not deterministic.

Causal models don’t promise certainty. They help you understand what likely led to an outcome, accounting for what you can and cannot control. 

Mark compared it to throwing a rock off a building. Gravity ensures it will hit the ground every time, but where it lands and what it hits is a different question, especially when you consider things like time of day, weather, etc.

Two-panel black-and-white cartoon showing a rock falling from a building—during the day toward a crowd of pedestrians, and at night onto an empty street—highlighting how context changes the outcome of the same action.
Gravity is the constant. Everything else is a variable. (Image generated by AI)

It’s the same with marketing. You know your efforts will have an effect. What you need to model is the direction, magnitude, and time lag.
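To make the time-lag point concrete, here’s a toy sketch (all numbers invented) that scans candidate lags to find where the spend-to-pipeline relationship actually peaks:

```python
import random

random.seed(7)

# Toy data: this week's marketing effort shows up in pipeline
# ~4 weeks later, plus noise. All numbers are invented.
LAG = 4
weeks = 60
spend = [random.uniform(0, 10) for _ in range(weeks)]
pipeline = [
    2.0 * spend[t - LAG] + random.gauss(0, 1) if t >= LAG else random.gauss(0, 1)
    for t in range(weeks)
]

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Shift spend by each candidate lag and correlate against pipeline.
by_lag = {lag: corr(spend[: weeks - lag], pipeline[lag:]) for lag in range(9)}
best = max(by_lag, key=by_lag.get)
print(best)  # the lag where the relationship peaks
```

A dashboard that only compares this week’s spend to this week’s pipeline would miss the relationship entirely; accounting for the lag is what surfaces it.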

Start with outcomes. Work backwards.

The shift to causality has nothing to do with better math. As Mark said, if a vendor’s pitch is built on “new math,” you should run. The math already exists, and it works. 

What matters is asking the right questions. Don’t start with your data. Start with the outcome you care about.

  • What moved deal velocity? 
  • What increased the average contract value? 
  • What pulled new buyers into your sales process?
Start with the outcome. Work backwards. See if the data supports it.

That shift exposes where the real drivers are. And it resets expectations for performance. 

Mark compared it to baseball. If you hit .350 in the majors, you’re a Hall of Famer, even though you failed two-thirds of the time. Baseball is full of external variables players can’t control. 

Side-by-side comparison of Babe Ruth’s .342 batting average with a failing test score, showing how success in high-variance environments like baseball differs from academic grading.
Marketing is more like baseball than academia.

In academia, it’s the opposite. Most of your GPA is in your control. 

GTM should be judged like baseball, but it’s not. Markets are messy. Nothing is certain. Causal modeling reflects that uncertainty by showing you what data you’re missing or misreading. 

But traditional marketing metrics like attribution expect certainty. Marketers are held to a standard closer to academia’s, which makes zero sense given how uncertain markets are. 

And therein lies the problem, and the critical insight for GTM teams: we’re trying to make sense of uncertainty using tools that assume predictability. The classic square peg in a round hole.

Attribution tools weren’t designed for complexity or context. They were built to assign credit. They don’t help GTM teams. They polarize them.

Final Thoughts

Mark explained a few key things every GTM team needs to plan for, including how correlation fails, why causality matters, what legal risk CMOs now face, and how to move beyond attribution logic in B2B GTM.

In Part 2, we’ll get into the mechanics. What causal models actually look like. How to map time lag and external variables. And how to build something your CFO will trust.

Missed the session? Watch it on LinkedIn here.

In the meantime, here’s a quick FAQ to clarify the core ideas, especially if you're new to the conversation or want to share this with your team.

  • What’s the difference between correlation and causality in marketing?
    Correlation shows when two things happen together. Causality shows what actually drove the result, taking time lag, context, and other variables into account.
  • Why are attribution models unreliable?
    Because they’re based on correlation. They’re easy to manipulate, and they rarely reflect what actually influenced the outcome.
  • What’s the legal risk for CMOs in 2025?
    Delaware rulings now hold all corporate officers, including CMOs, legally accountable for data quality. Shareholder lawsuits are already targeting flawed CRM data.
  • What’s a better alternative to MTA or last-touch models?
    Causal modeling. It starts from outcomes and works backwards to isolate what actually moved the needle.
  • Do I need better data to start?
    Not necessarily. You need better questions. Causal models help you figure out what data matters and where the gaps are.
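A tiny simulation makes the first FAQ answer concrete. In this hypothetical sketch, a quarter-end sales push drives both webinar attendance and closed deals, so the two correlate strongly even though neither causes the other:

```python
import random

random.seed(1)

# Hypothetical confounder: a quarter-end push lifts BOTH webinar
# attendance and closed deals. Webinars never touch deals directly.
n = 500
push = [random.gauss(0, 1) for _ in range(n)]
webinars = [p + random.gauss(0, 0.5) for p in push]
deals = [p + random.gauss(0, 0.5) for p in push]

def corr(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = corr(webinars, deals)
print(round(r, 2))  # strongly positive anyway
```

An attribution chart would happily hand the webinar credit for those deals. A causal lens asks what happens to deals when the webinar is removed and the push stays.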

If your dashboard still runs on attribution logic, this is your chance to change it.

If you like this content, here are some more ways I can help:

  • Follow me on LinkedIn for bite-sized tips and freebies throughout the week.
  • Work with me. Schedule a call to see if we’re a fit. No obligation. No pressure.
  • Subscribe for ongoing insights and strategies (enter your email below).

Cheers!

This article is AC-A and published on LinkedIn. Join the conversation!

Insight

Better At vs. Better Than: How GTM Teams Get Out of Their Own Way

Learn how GTM teams stop tripping over each other by shifting from ego to impact. Better At vs. Better Than—what mindset drives growth?
June 3, 2025
|
5 min read

GTM teams trip over each other because of culture, not strategy. “Better At” is a mindset that can shift your team from competing for credit to actually getting better at working together. Be “Better At” curiosity, not control; showing up to contribute, not one-upping. If you’re leading a B2B team and you’re tired of the same old drama, this one’s for you.

Takeaways

  • GTM teams break down when people fight for credit instead of solving real problems together.
Strained relationships across GTM teams are a trust issue.
  • Being curious, generous, and clear beats being clever, loud, or right.
  • Marketing teams that ask better questions create more value.
  • Better At means improving the team while you improve yourself.

Better At, Not Better Than

We don’t need more clever acronyms or prompts, another playbook, or another dashboard.

We need braver marketers. People who care more about showing up and improving their tribe.

That’s what Tracy Borreson and I got into during her Crazy Stupid Marketing podcast.

We started with marketing. But the conversation kept pulling us deeper into mindset, culture, and how GTM teams can stop tripping over each other.

Our discussion built on what I shared earlier in this LinkedIn Pulse article and led to a thoughtful question:

What happens when we stop trying to be better THAN each other… and start getting better AT helping each other?

How “Better At” Started

This idea started in a personal place. A strained conversation. A moment that reminded me we’re all just doing our best with what we’ve got.

And a quote from Epictetus.

“These reasonings do not cohere: I am richer than you, therefore I am better than you; I am more eloquent than you, therefore I am better than you. On the contrary these rather cohere: I am richer than you, therefore my possessions are greater than yours; I am more eloquent than you, therefore my speech is superior to yours. But you are neither possession nor speech.”
 
Epictetus, Enchiridion, Chapter 44

That stuck with me.

Being better at something doesn’t make you better than someone.

And if you’re better at something, what if you helped someone else become better at it too? What if they got better at it and showed someone else?

That’s the heart of it.

Better At is about thinking of ways to improve others while improving ourselves.

We grow. We pay it forward. We do the work and learn together.

Why Marketing Needs This Shift Now

There’s an old joke that goes something like this:

How many marketers does it take to screw in a lightbulb? Just one. But all the others think they can do it better.

Every aspect of an organization is full of this kind of “better than” behavior, not just marketing.

We chase credit, one-up each other, and cover our asses by throwing each other under the bus. 

We can’t help it. It’s systemic and it starts at an early age.

It kills effectiveness and alignment, no matter how efficient we think our silos are.

0 Effectiveness × 5 Efficiency = 0

If cross-functional teams can’t co-create value, no amount of leadgen and demandgen will save them.

Better than creates friction. Better at creates connection. 

Creating a culture of curiosity starts by changing what you reward. 

Stop Competing. Start Contributing.

Healthy competition is good. Sports is a good example.

But you can’t win the hearts and minds of your teammates by always competing with them or looking for the “easy button” to make yourself bigger than you are.

AI can help you be better at marketing. But only if it sharpens your thinking, not replaces it.

If your team’s output feels generic, the problem isn’t the tool. It’s the fear behind how it’s being used.

Be generous, empathetic, and useful. Ask better questions. You have to be the change you want to see. That’s how you get better at making better contributions.

Colonel Chris Hadfield quote: Things aren’t scary. People are scared. Every single person you meet is struggling.

Build a “Better At” Culture

If you lead a B2B tech company and this resonates, here are 3 things to consider:

  1. Audit your language. Are your teams talking about credit or contribution? Check your Slack threads, meeting notes, and handoff docs.
  2. Ask the multiplier question. What are we X-times great at, but getting zero return from because we’re not aligned in our thinking?
  3. Run a “Better At” session. Bring in your GTM leaders and ask: Where are we trying to be better than each other, when we could be better at something together?

You don’t need a re-org. You just need a shift in thinking.

When GTM teams work together, the impact shows up in shorter sales cycles, better conversion rates, and less wasted spend.

A Simple Ask

You don’t need to overhaul your GTM strategy overnight. But what if you started asking different questions?

Take these into your next leadership meeting:

  • Are we competing internally, or contributing collectively?
  • What would it look like to be better at partnerships, handoffs, and feedback?
  • Where are we rewarding performance over progress?
  • How are we creating a marketing culture of curiosity, not compliance?
  • And where are we still acting like attribution is more important than alignment?

Better At isn’t a tactic, a course, a playbook. It’s a mindset.

So… where are you trying to be better than, when you could be better at?

Let’s talk about it. 


Strategy

Bayesian ≠ Causal: Why GTM Metrics Still Miss the Mark

Most B2B GTM teams confuse correlation with causation. Learn how Bayesian and causal models differ and how to measure what actually drives results.
May 27, 2025
|
5 min read

Many GTM teams still track activity instead of impact. Bayesian models estimate what likely played a role. But they don’t explain what caused the result, or what might have happened if something changed. Causal models test and measure what contributed and why.

Takeaways

  • Attribution shows what happened. It doesn’t explain what contributed and why.
  • Bayesian models estimate probability, not cause.
  • Causal models test what would’ve happened if you did something different.
  • Causal AI applies the logic behind causal modeling at scale.

Precision and Truth Are Not the Same

As mentioned in last week’s Razor, Bayesian models can help marketers get out of the last-touch attribution trap. They give us a way to estimate how likely it is that something contributed to a particular result.

But that’s not the same as knowing what caused the result and why.

Too many GTM teams still confuse probability with proof; correlation with causation. But more precision does not mean more truth. 

Causal models answer a different question: what would have happened if we had done something else? That’s the question your CFO wants answered, and the one your current model can’t touch.

We need to ask better questions instead of defending bad math.

Much of this discussion was sparked by Mark Stouse on LinkedIn. He clarified a common misconception: that Bayesian modeling is the same as causal inference. It’s not. And that distinction is what we’re getting into.

The past is not prologue.
 
Mark Stouse, CEO, Proof Causal Advisory

What Most GTM Teams Still Get Wrong

Most attribution models are shortcuts, not models.

Rule-based. Last-touch. “Influenced” revenue. They’re easy to run. Easy to explain. But disconnected from real buying behavior.

Attribution measures who gets credit, not contribution.

Bayesian modeling doesn’t rely on a single touchpoint or fixed credit rule. It estimates how likely it is that something played a role, like a channel, sequence, or delay.

Bayesian models give you a better approximation of influence than rule-based methods. But they stop short of answering the causal question: What made this happen and why?

Most attribution models never get past basic association (correlation).

As Jacqueline Dooley explains in this MarTech article, rule-based methods don’t reflect how buying actually works. They measure what happened, not why it happened.

In other words, most GTM teams are still stuck in Level 1 of the causal ladder.

Pearl’s Ladder of Causation adapted for GTM measurement stages.

What Bayesian Models Are Good At

Bayesian models help you estimate whether something played a role. Not how much credit to assign.

That’s why they help measure things like brand recall, ad decay, and media saturation. They estimate influence but they don’t explain the cause. 

Bayesian vs. Causal Models: What They Can and Can’t Tell You

  • Output: Bayesian tells you how likely something contributed; causal tells you what would have happened if something changed.
  • Based on: Bayesian uses observed behavior; causal uses structured interventions and counterfactuals.
  • Strengths: Bayesian estimates influence and handles uncertainty; causal simulates alternate outcomes and proves effect.
  • Limitations: Bayesian doesn’t explain why; causal requires strong assumptions and structure.
  • Used for: Bayesian suits brand recall, decay, and media saturation; causal suits forecasting, lift tests, and strategy simulation.

Mark isn’t the only one pushing for clarity here.

Quentin Gallea wrote an excellent article on Medium that details how machine learning models are built to predict outcomes, not explain them. They’re correlation engines. And when teams mistake those outputs for causal insight, bad decisions follow.

If your model only shows what happened under existing conditions, it can’t tell you what would’ve happened if something changed. That’s the whole point of causal reasoning.

Causal AI tools like Proof Analytics (shown below) help teams run “what if” scenarios at scale, combining machine learning to handle the messiness of real-world data with causal logic to explain what actually makes an impact.

Causal AI tools like Proof Analytics help teams run “what if” scenarios at scale.

What Causal Models Tell Us That Bayesian Models Can’t

Causal modeling shows what might have happened if you changed something, like timing, budget, or messaging.

That’s the question your CFO is already asking.

As Mark pointed out, Bayesian models can’t answer that. Unless you impose a causal structure, they just update likelihoods based on what already occurred.

If you’re only predicting what’s likely under existing conditions, you’re stuck in correlation. 

Click Path vs. Causal Chain

  • Click path (what we track in attribution): Ad → Webinar → Demo → Closed Won. Causal chain (what causal models simulate): Ad removed → Webinar → Demo → ?
  • Click path: Ad → No Demo → No Sale. Causal chain: Ad replaced → Event → Closed Lost.
  • Click path: Ad → Case Study → Sales Call → Closed Won. Causal chain: Same sequence, different budget → ?

What to Measure 

As mentioned, GTM dashboards show you what happened, like clicks. They don’t tell you what contributed to those clicks and why. 

Bayesian models help you spot patterns. 

  • How often something showed up. 
  • How long it stuck. 
  • How likely it played a role.

That’s useful. But it’s not enough.

Why? Because even though Bayesian models are probabilistic, they don’t model counterfactuals unless a causal structure is added. They estimate likelihoods, not outcomes under different conditions.

If you want to know whether something made a difference (or what would’ve happened if you did it differently) you need a model that can test it.

So instead of chasing more data, focus on the data you already have.

  • Move beyond rule-based attribution; focus more on patterns of exposure over time.
  • Move beyond clicks and form fills; focus more on contribution across all channels.
  • Move beyond volume of MQLs; focus more on influence on decision-making.
  • Move beyond campaigns measured in isolation; focus more on cumulative brand and media impact.
  • Move beyond basic activity metrics; focus more on probable cause, not just correlation.

By the way, if you’ve never looked at buyer behavior through a statistical lens, Dale Harrison’s 10-part LinkedIn series on the NBD-Dirichlet model is worth bookmarking. This series will help you understand how buyers typically behave in a category:

  • how often most people buy (70% of all purchases are made by light buyers, not heavy ones)
  • how rarely they buy from the same brand twice
  • why brand growth depends more on reaching more buyers than retaining the same ones
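If you want to see the gamma-Poisson (NBD) logic in action, here’s a small simulation with assumed parameters (not Harrison’s figures). Buying rates vary across people via a Gamma distribution, and each person’s purchases are Poisson around their rate; most of the brand’s buyers come out light:

```python
import math
import random

random.seed(42)

# NBD sketch with assumed parameters: Gamma-distributed buying rates,
# Poisson purchase counts around each person's rate.
def poisson(lam):
    # Knuth's method; fine for the small rates used here.
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

buyers = 100_000
rates = [random.gammavariate(0.3, 2.0) for _ in range(buyers)]  # mostly tiny
purchases = [poisson(r) for r in rates]

bought = [p for p in purchases if p > 0]
light_share = sum(1 for p in bought if p <= 2) / len(bought)
print(round(light_share, 2))  # most buyers bought only once or twice
```

The exact share depends on the assumed Gamma parameters, but the skew itself is the point: heavy buyers are rare, so growth has to come from reaching more light buyers.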

Final Thoughts

Rule-based attribution like first/last-touch only tracks what happened. It doesn’t explain what mattered.

Bayesian modeling gets you closer by helping you see patterns. But it doesn’t explain cause.

Causal models let you test what could make an impact, what may not, and why.

And as Mark Stouse pointed out, this only works if you’re using a proper causal framework. Bayesian models can’t tell you what caused something unless that structure is built in.


Strategy

How Bayesian Models Measure Brand Impact Before Buyers Click

Bayesian models show what moved buyers before they clicked. Learn how to prove brand impact and fix last-touch bias in your B2B attribution strategy.
May 19, 2025
|
5 min read

Traditional attribution models do not help marketers. Last-touch attribution reduces marketing to click-based metrics that rarely hold up when the CEO or CFO asks, “Where’s the revenue?” Bayesian models offer a better way to measure what’s actually impacting the bottom line, building the brand, and influencing pre-funnel activity. This article shows you how to measure brand impact using Bayesian attribution models, especially for B2B teams tired of broken funnels.

Takeaways

  • Last-touch attribution is a marketing-sourced metric trap. It over-credits the final click and underestimates the impact of brand-building.
  • Bayesian models help us account for when and how a touchpoint influences conversion, not just if it does.
  • Ad fatigue happens when too many impressions decrease conversions.
  • Familiar brands benefit from within-channel synergy; unfamiliar brands need cross-channel reinforcement.
  • Bayesian models can also help predict pre-funnel influence, including non-converting journeys and offline media.

What Is Bayesian Modeling?

A Bayesian model helps you set and update expectations based on new evidence. Unlike traditional attribution, Bayesian methods can surface marketing impact pre-funnel.

Think weather forecasts: You start with what you know (like the season), then adjust your expectations based on new clues (like thunder). 

In marketing, Bayesian modeling weighs each channel’s influence based on how often it contributes to a sale, how recently it was seen, and how it interacts with other touchpoints.

Bayesian and causal models can overlap, but they’re not the same. Bayesian models estimate probability, like how likely something is based on data and prior beliefs. Causal models estimate what happens when something changes. The strongest marketing analytics use both: probabilistic thinking to handle uncertainty, and causal structure to guide decisions.
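The weather-forecast intuition is just Bayes’ rule. A sketch with made-up numbers:

```python
# Made-up numbers: prior belief in rain, revised after hearing thunder.
prior_rain = 0.30            # base rate for the season
p_thunder_if_rain = 0.60     # thunder is common when it rains
p_thunder_if_dry = 0.05      # and rare when it doesn't

# Bayes' rule: P(rain | thunder) = P(thunder | rain) * P(rain) / P(thunder)
p_thunder = (p_thunder_if_rain * prior_rain
             + p_thunder_if_dry * (1 - prior_rain))
posterior_rain = p_thunder_if_rain * prior_rain / p_thunder
print(round(posterior_rain, 2))  # 0.84 -- the clue sharply updates the prior
```

Swap “rain” for “this channel contributed” and “thunder” for an observed touchpoint, and you have the skeleton of Bayesian attribution.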

If you want to geek out a little more, Niall Oulton at PyMC Labs wrote an excellent piece on Medium about Bayesian Marketing Mix Modeling.

It’s a great place to dive deeper. 

Why Bayesian Attribution Beats Last-Touch for B2B Marketing

Instead of guessing or oversimplifying, Bayesian modeling uses probability and real-world behavior to show what actually contributed to a sale, and how much.

Elena Jasper provides a good explanation using a research paper published in 2022, Bayesian Modeling of Marketing Attribution. It shows how impressions from multiple ads shape purchase probability over time. In a nutshell, too many impressions (especially from display or search) can actually reduce the chance of conversion.

Even more insightful, the model gives proper credit to owned and offline channels that traditional attribution ignores. 

Check out Elena’s Bayesian Attribution podcast episode.

Bayesian Models Show Influence

This is where things get interesting for brand builders.

Another study from 2015, The Impact of Brand Familiarity on Online and Offline Media Effectiveness, used a Bayesian Vector Autoregressive (BVAR) model to track synergy between media types. 

Here’s what they found:

  • Familiar brands get more value from “within-online synergy” (owned and paid media working together)
  • Unfamiliar brands benefit more from “cross-channel synergy” (digital and offline working together)

In other words, the value of your brand influences how effective your media is. So if you’re only looking at last-touch clicks, you’re missing the bigger picture. 

Bar chart showing stronger online synergy for familiar brands and stronger cross-channel synergy for unfamiliar brands.

This chart compares how different types of media synergy play out based on brand familiarity. Familiar brands benefit more from reinforcing messages within the same (online) channel. Unfamiliar brands get a bigger boost from cross-channel combinations, especially from pairing digital with offline.

  • Within-Online Synergy: How well paid and owned digital channels reinforce each other.
  • Cross-Channel Synergy: How well digital and traditional/offline channels combine.
  • Synergy Score: A regression-based measure of how much more effective two channels are together than separately.

SOURCE: The Impact of Brand Familiarity on Online and Offline Media Effectiveness (Pauwels et al., 2015), See Section 4.4, Table 3
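To make the synergy idea concrete, here’s a hedged 2×2 sketch (all numbers invented, not Pauwels et al.’s estimates) of the interaction contrast that sits behind a regression-based synergy score:

```python
import random

random.seed(5)

# Invented response model: two channels, each with its own lift, plus a
# bonus when they run together -- the synergy we want to detect.
def response(digital_on, offline_on):
    lift = 0.6 * digital_on + 0.5 * offline_on
    synergy = 0.4 * digital_on * offline_on
    return 1.0 + lift + synergy + random.gauss(0, 0.3)

def mean_response(digital_on, offline_on, n=20_000):
    return sum(response(digital_on, offline_on) for _ in range(n)) / n

# Interaction contrast: (both) - (digital only) - (offline only) + (neither).
synergy_score = (mean_response(1, 1) - mean_response(1, 0)
                 - mean_response(0, 1) + mean_response(0, 0))
print(round(synergy_score, 2))  # positive => the pair beats the sum of its parts
```

If the two channels merely added up, the contrast would hover around zero; a clearly positive value is the signature of channels reinforcing each other.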

Yes, It Helps You See Pre-Funnel Impact Too

Bayesian models can also account for non-converting paths. That means they help you see how early exposures from media like TV, radio, podcasts, branded search, and earned media changed the likelihood of a purchase, even if the customer didn’t buy right away.

The ability to prove that your brand is being remembered is the holy grail of brand marketing.

Bar chart comparing credit given to last-touch vs. early exposures under different attribution models.

This chart compares how two attribution models assign credit for a conversion. Bayesian models are better suited for evaluating pre-funnel impact. They account for influence, not just transactions. 

These models don’t deliver hard proof. They provide probabilistic estimates, like how likely each channel or impression influenced conversion, even when no one clicks. It’s not deterministic, but it’s a far better approximation of real buyer behavior.

In a nutshell, memory and exposure matter, even when they don’t lead directly to a form fill.

When you start combining that with media decay rates and interaction effects, you finally have a way to show how long your brand-building efforts stick and when they fade.

SOURCE: Bayesian Modeling of Marketing Attribution (Sinha et al., 2022), See Section 4.2.3: “Interaction Effects”

Exponential decay curve showing how ad influence fades with time.

This chart shows how quickly a single ad loses its persuasive power. Influence fades exponentially, especially for short-lived channels like display or search. This is important for building brand reputation because a memorable first impression doesn’t last forever. Brand building isn’t one and done. 

This supports what the Sinha Bayesian attribution paper modeled: ad influence is not equal, and timing matters.

SOURCE: Bayesian Modeling of Marketing Attribution (Sinha et al., 2022). See Section 4.2.2: “Direct Effect”; Figure 5: Posterior distribution of ad decay parameters
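Exponential decay itself is easy to sketch. The 7-day half-life below is an assumption for illustration, not a figure from the Sinha paper:

```python
import math

HALF_LIFE_DAYS = 7  # assumed, purely illustrative
decay_rate = math.log(2) / HALF_LIFE_DAYS

def remaining_influence(days_since_exposure):
    # Fraction of the original persuasive effect still "alive".
    return math.exp(-decay_rate * days_since_exposure)

for d in (0, 7, 14, 28):
    print(d, round(remaining_influence(d), 2))
# Influence halves every 7 days: 1.0 -> 0.5 -> 0.25 -> ... -> 0.06 by day 28.
```

Shorter-lived channels like display or search would use a much shorter half-life; the shape, not the specific number, is what matters for planning refresh cadence.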

Chart showing how conversion probability flattens after repeated ad exposures for SaaS vs. enterprise.

This chart shows saturation: how conversion probability builds with more ad impressions, then flattens out. Most SaaS GTM motions (self-serve, freemium, free trial) ramp up fast but fatigue soon after (peaking around 12 impressions). Enterprise GTM builds more slowly but needs more impressions to hit its ceiling (closer to 25).

Regardless of the model, impressions lose influence over time. That’s ad decay in action. But the number of impressions it takes to move the needle? That’s where most SaaS solutions and enterprise solutions part ways.

SOURCE: Bayesian Modeling of Marketing Attribution (Sinha et al., 2022), See Section 4.2.3: “Interaction Effects”; Figure 7: Negative interaction from high ad frequency. Real-world ad-to-pipeline benchmarks from WordStream, Databox, and SaaStr.
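The saturation shape can be sketched with a curve that rises fast and then flattens. The ceilings and ramp rates below are assumptions for illustration, not the paper’s estimates:

```python
import math

# Assumed saturating curve: conversion probability approaches a ceiling
# as impressions pile up; the ramp rate controls how fast it gets there.
def conversion_prob(impressions, ceiling, rate):
    return ceiling * (1 - math.exp(-rate * impressions))

saas = [conversion_prob(i, ceiling=0.08, rate=0.35) for i in range(31)]
enterprise = [conversion_prob(i, ceiling=0.12, rate=0.12) for i in range(31)]

# Marginal gain from one more impression shrinks as the curve saturates.
early_gain = saas[5] - saas[4]
late_gain = saas[25] - saas[24]
print(round(early_gain, 4), round(late_gain, 4))
```

The comparison of early versus late marginal gain is the practical takeaway: past the knee of the curve, extra impressions mostly buy fatigue.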

How to Get Started Without Boiling the Ocean

Most brands aren’t ready to run full Bayesian models. That’s OK.

It’s better to tackle the low-hanging fruit and build from there:

  • Track both converting and non-converting paths
  • Look for signal decay (how quickly clicks or views stop driving action)
  • Identify how owned, earned, and offline channels might contribute earlier than you think
  • Ask your data team or vendor if they support probabilistic models (some do; many fake it)

If you’re only measuring what’s easy to measure, you’ll keep spending money in the wrong places and frustrating your exec team.

  • Measure decay of ad influence over time, not just last-click or last-touch.
  • Measure non-converting journeys, not only converting paths.
  • Measure cross-channel synergy, not single-channel views.
  • Measure confidence intervals in attribution, not fixed attribution weights.
  • Measure owned and offline media impact, not only digital paid media.

Final Thoughts

Like causal models, Bayesian models are essential to B2B marketing analytics. Relying on click-based attribution hides where budget is wasted and where your brand building is pulling its weight.

Causal and Bayesian models aren’t mutually exclusive. Bayesian Structural Time Series models, for example, blend both and help estimate impact while accounting for timing, media decay, and external variables.

These models and tools help us make smarter marketing decisions.

If you like this content, here are some more ways I can help:

  • Follow me on LinkedIn for bite-sized tips and freebies throughout the week.
  • Work with me. Schedule a call to see if we’re a fit. No obligation. No pressure.
  • Subscribe for ongoing insights and strategies (enter your email below).

Cheers!

This article is AC-A and published on LinkedIn. Join the conversation!

Strategy

The End of MQLs Part 5: Better Questions Produce Better Results

Most GTM teams ask the wrong questions. Learn how better questions lead to brand trust, better signals, and real pipeline in B2B tech.
May 6, 2025
|
5 min read

Many GTM teams still ask the wrong questions. It’s costing them trust, clarity, and pipeline. Questions centered on critical thinking, buying behavior, and brand recall help GTM teams shift from chasing short-term results to building long-term buyer confidence. If you want better outcomes, if you want to earn trust, ask better questions.

Takeaways

  • When pipelines run dry, panic can create short-term wins but also long-term waste.
  • Smarter questions help earn trust, create clarity, and grow pipeline.
  • The health of your brand reputation determines how sustainable your growth is.
  • Marketing needs to teach the business how marketing actually works.
  • Confidence and trust beat clicks, especially when buyers aren’t ready to buy.

Quick Recap: Parts 1–4

Missed the first 4 parts?

  • Part 1 explained why MQLs have never worked.
  • Part 2 defined real buying signals worth tracking.
  • Part 3 made the case for brand-building as risk mitigation.
  • Part 4 unearthed the time lag between marketing and revenue.

Part 5 gets into how GTM teams can stop the pursuit of more MQLs and focus on earning confidence and trust by asking better questions. 

Because if you’re still asking, “How many leads did we get this week?” you’re not solving for what’s stalling your growth.

GTM Teams Still Focus on the Wrong Things

When pipeline is soft or revenue is lagging behind, the same questions always come up:

  • “How many MQLs did we get?”
  • “Can we boost webinar registrations?”
  • “What if we offered a gift card for the demo?”
  • “Do we need to redesign the website?”

That’s leadgen panic in action. And it’s a recipe for bad decisions and chasing the wrong metrics.

Why? Because when we’re reactive, we make knee-jerk decisions that put us into a spin cycle.

Circular diagram showing the lead generation panic cycle: Leads go down, panic sets in, short-term tactics are used, vanity metrics spike, and leadership becomes frustrated. Visualizes the pattern B2B marketing teams often fall into.

Instead of solving real problems, we tend to default to short-term busy work and track vanity metrics that look good on dashboards. It’s the reason why so many funnel stages are filled with uninterested buyers and unqualified activity.

FYI, funnels are another outdated trap, but I digress. 

It’s best to step back and ask questions that put customers, buying behavior, and timing at the center.

You’ll get better answers for your marketing, your buyers, and your data.

Questions That Build Better Pipeline

If your marketing is broken, more MQLs won’t fix it. Asking better questions will.

Here are a few worth considering:

From Panic | To Clarity
“How do we get more leads?” | “Are we attracting the right buyers or just clicks?”
“Why aren’t we getting meetings?” | “What signals tell us the buying group is forming?”
“How can we fill the funnel faster?” | “Where are we showing up in the buyer’s pre-funnel journey?”
“How do we increase conversions?” | “What’s helping us earn trust and what’s confusing?”

You get the idea. 

Without knowing what’s causing poor marketing results, we remain forever trapped in the leadgen spin cycle. Always reacting. Always chasing our tails. 

Once we know why something is happening, the path forward becomes much clearer. It takes the pressure off so we can focus on moments that actually make an impact.

For more, see 12 Questions Every B2B Tech Marketer Should Ask In 2025.

Simple Isn’t Always Smart

In a recent LinkedIn post, Mark Stouse, CEO of Proof Causal Advisory, posed 12 questions to ask when there’s a push to make things “easy and simple” at work. 

Simplifying for clarity is valuable, sure, but oversimplifying complex solutions just to make them feel and sound “easy” is misleading.

Are we simplifying tasks to enhance understanding, or are we oversimplifying to the detriment of critical thinking? Oversimplification can mask underlying issues and distract us with vanity metrics like MQLs instead of focusing on genuine engagement and earning trust with our audience.

That distinction matters, especially in B2B tech where everyone looks the same, smells the same, and says the same things.

When we create MQL factories we lose nuance, oversimplify the process, and skip the hard thinking. And when that happens, we chase the wrong outcomes, ask the wrong questions, and miss what really drives revenue.

Asking better questions demonstrates critical thinking. It’s how mature marketing teams challenge default assumptions and make better decisions.

You don’t need to make your strategy easy. You need to make it effective.

Your Buyers Already Know What They Want

The 6sense 2024 Buyer Experience Report shows that 81% of buyers have already picked their vendor before they ever contact sales.

And they’re doing it quietly. In their own time. On their own terms.

Funnel diagram showing that over 80% of buyers have already picked a vendor before engaging in a sales conversation, based on 6sense research. Highlights that lack of brand awareness, not product quality, is often the reason vendors aren’t chosen.

So if your GTM team is focused only on MQLs, you’re way too late to the party.

Buyers want to feel confident. They seek clarity, proof that you’ve done this before. They put their feelers out months before you put them in your funnel. 

Ask yourself:

  • “When they search their category, do they see us?”
  • “When they ask their peers, do they mention us?”
  • “When they hit our site, do they feel understood?”
  • “What early indicators show us long-term marketing is working?”

Signals like that build pipeline you can trust. 

Brand Recall

One of the best questions any founder, CMO, or GTM team can ask is:

“What do we want to be remembered for?”

Because being remembered is more valuable than having a better widget.

You can have a great product and still lose. You can offer better services and still be ignored.

The brands that win are the ones that buyers remember. The ones that help, not pitch. They show up early. They stay visible.

They also know B2B buying is not a linear path to fast money. They understand what brand recall looks like.

Side-by-side comparison of a linear buying journey vs. an actual complex buyer journey. Left shows a simplified 2–4 week path from ad click to purchase. Right shows a nonlinear 2–4 year journey with touchpoints like blogs, demos, social media, PR, and peer influence.
What Brand Recall Looks Like

Final Thoughts

B2B Marketing’s fixation with MQLs gave us volume, not clarity. Like drugs, it rewarded short-term fixes and eroded brand health. 

If you want marketing to truly drive revenue, you need to teach the business how it actually works. And that starts with better questions.

Kerry Cunningham also wrote an excellent piece about The De-Industrialization of B2B Marketing. It’s worth reading and sharing with your leaders. 

Thanks for following along this 5-part series on The End of MQLs. If you missed the others, catch them on Achim’s Razor.


Strategy

The End of MQLs Part 4: The Time Lag No One Talks About

Learn why B2B marketing often takes 6–18 months to show results, how time lag impacts metrics, and how leaders can reset expectations.
April 29, 2025
|
5 min read

B2B buyers don’t buy on your marketing timeline. This is why B2B marketing ROI shows up months after the campaign. The sales we generate today are the result of the marketing we did 2-3 quarters ago (sometimes longer). Time lag impacts pipeline so much that sales teams often misjudge it. Resetting expectations is the only way to stop chasing fake deal velocity.

Takeaways

  • B2B buyers are probably moving slower than your CRM suggests.
  • Marketing results show up anywhere between 6–18 months later.
  • It’s human instinct to compress timelines and not see the full impact.
  • Clean CRM data and behavior tracking improve the accuracy of your metrics.
  • Causal AI helps teams model lag, forecast ROI, and make better bets.

Quick Recap: Parts 1–3

We’ve already covered a lot of ground when it comes to the MQL trap:

  • Part 1 debunked MQLs. They track clicks, not buying intent.
  • Part 2 showed what real buying signals look like across groups, not individuals.
  • Part 3 explained how brand-building earns trust before buyers are ready.

Part 4 gets into why results take longer than anyone wants to admit and how to build a GTM strategy that respects the clock buyers are actually on.

Buyers Aren’t On Your Timeline

Even if you do everything right—launch the right campaign, reach the right people, hit the right message—you may not see anything in the pipeline for months.

Why?

Because buying is harder than selling. It’s slow, non-linear, and cautious.

Internal View | Buyer Reality
We need pipeline this quarter. | We’ll revisit this next fiscal year.
They downloaded a PDF! | They’re researching for the future.
Follow up and book a demo ASAP. | They’re still trying to get budget.
It’s not working. Try something else. | They’ll forget you before they’re ready.

Dreamdata’s analysis shows it takes at least 6 months for most B2B marketing efforts to show up in pipeline.

That means this quarter’s results happened because of the marketing you did 2-3 quarters ago. And it can take much longer if you sell complex and pricey solutions. 

One of the biggest mistakes B2B tech companies have made is expecting marketing to operate like a vending machine. 

“The ‘evidence’ for how the B2B GTM system operates has existed primarily in the minds of those who believed that they could make it a deterministic gumball machine. They adopted a ‘if this, then that’ sequencing mentality that does not reflect real life, and then they pounded audiences for 20 years to ‘generate demand.’ At no time was this about the customer. It was always about the hockeystick.”
 
Mark Stouse, CEO, Proof Causal Advisory

Even if marketing works, it won’t look like it worked, at least not right away.

For more, see: Purchasing Timelines in B2B

What Time Lag Actually Looks Like

Humans tend to distort time. It happens a lot with victims and witnesses of crime: what was actually minutes can feel like seconds.

And just like eyewitnesses misjudge how long something took, we tend to compress the buying timeline, focusing only on what we can see (the sales conversation), not what came before it.

Here’s a pattern you’ve probably seen:

  • Sales reps report a 30-day close.
  • Leadership assumes fast deal velocity.
  • Pipeline planning gets overly optimistic. 
  • Marketing gets raked over the coals for underperforming.

The reality is that the buyer started researching months before the first sales call.

Think of marketing like farming. You don’t plant seeds, dig them up a month later, and call it a day because they didn’t grow fast enough. Let your marketing take root.

Graph illustrating the time lag between marketing investment and realized revenue, showing where leadership typically gets impatient.
Why sales and marketing funnels feel broken

How Time Lag Impacts Results

Most CEOs and CFOs want to measure the impact of marketing right now. But early-stage marketing (especially brand work) doesn’t show up in pipeline for months.

Bar chart comparing sales cycle, marketing cycle, and purchasing cycle durations in B2B tech, highlighting where brand-building works but isn’t immediately visible.
The B2B Tech Buying Duration Disconnect

What to Do About It

Set expectations by tracking and reporting these metrics:

  • Branded Search tracks increases in direct traffic for your brand and indicates growing awareness.
  • Repeat Visits monitor return visits to your website and can signal sustained interest from potential buyers.
  • Group Buying Signals indicate multiple people from the same company are hitting your website, partners, reviews, etc. 
  • Category Recall assesses whether your brand comes to mind when customers think of your product category and reflects mental availability. 
  • Pipeline Impact with time lag built in. If you’re new to Causal AI, it essentially refers to advanced analytics that identify cause-and-effect relationships, helping you model pipeline impact while accounting for time lag. Tools like Proof Analytics can help you do that.
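A lightweight way to estimate that lag, before investing in a causal platform, is to correlate pipeline against marketing spend shifted by different delays and see which shift fits best. This is a rough diagnostic sketch, not a causal model; the data and variable names are hypothetical:

```python
import math
import statistics

def pearson(x, y):
    """Pearson correlation between two equal-length series."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def best_lag(spend, pipeline, max_lag):
    """Return the lag (in periods) at which shifted spend best correlates with pipeline."""
    return max(
        range(max_lag + 1),
        key=lambda lag: pearson(spend[: len(spend) - lag], pipeline[lag:]),
    )

# Hypothetical quarterly data: pipeline echoes spend bursts two quarters later.
spend = [10, 50, 10, 10, 60, 10, 10, 10]
pipeline = [8, 9, 11, 48, 12, 11, 58, 12]
```

Here best_lag(spend, pipeline, 3) returns 2: this quarter’s pipeline tracks spend from two quarters back, matching the 2-3 quarter delay described above. A real analysis would use more history and control for seasonality, which is where dedicated tools earn their keep.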

Important: Good forecasting starts with data integrity. If your CRM isn’t capturing the entire buying journey, your plan is built on guesses.

Above all, help educate your executive team on what early traction and long-term brand building looks like. Remind them to be patient. This isn’t a quick fix. 

Reset Expectations Around Buyer Behavior

B2B marketing ROI doesn’t land in the quarter the money is spent, yet most marketers struggle to measure ROI beyond six months.

That disconnect creates pressure to chase the wrong metrics, like MQLs.

Do this instead:

  • Help your exec team understand the time lag that is unique to your buying cycle and why results take this long.
  • Model ROI scenarios over the long term. Tools like Proof Analytics or Recast can help you account for time lag.
  • Set realistic expectations early. For example, “We expect to see impact in 6–12 months. Here’s why...”

Final Thoughts

If the leadership team expects marketing to create pipeline in 90 days, they’re not being unreasonable. They’re just misinformed, and it’s Marketing’s job to help them understand.

B2B buyers have their own timelines; you have to respect the time lag that is unique to their buying cycle.

You also have to give brand marketing time to take root and deliver long-term ROI. That’s how you stay relevant and earn trust.

Next week, Part 5 wraps up the series by getting into critical thinking and asking smarter questions.
