Many GTM teams still track activity instead of impact. Bayesian models estimate what likely played a role, but they don’t explain what caused the result or what might have happened if something changed. Causal models test and measure what contributed, and why.
As mentioned in last week’s Razor, Bayesian models can help marketers get out of the last-touch attribution trap. They give us a way to estimate how likely it is that something contributed to a particular result.
But that’s not the same as knowing what caused the result and why.
Too many GTM teams still confuse probability with proof, and correlation with causation. More precision does not mean more truth.
Causal models answer a different question: what would have happened if we had done something else? That’s the question your CFO wants answered. And it’s the one your current model can’t touch.
We need to ask better questions instead of defending bad math.
Much of this discussion was sparked by Mark Stouse on LinkedIn. He clarified a common misconception: that Bayesian modeling is the same as causal inference. It’s not. And that distinction is what we’re getting into.
The past is not prologue.
Mark Stouse, CEO, Proof Causal Advisory
Most attribution models are shortcuts, not models.
Rule-based. Last-touch. “Influenced” revenue. They’re easy to run. Easy to explain. But disconnected from real buying behavior.
Attribution measures who gets credit, not contribution.
Bayesian modeling doesn’t rely on a single touchpoint or fixed credit rule. It estimates how likely it is that something played a role, like a channel, sequence, or delay.
Bayesian models give you a better approximation of influence than rule-based methods. But they stop short of answering the causal question: What made this happen and why?
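To make that concrete, here’s a minimal stdlib-only Python sketch of the Bayesian idea. Everything in it is an assumption for illustration: the channel (“webinar”), the lead counts, and the flat Beta(1, 1) prior. It estimates the posterior probability that leads who touched the channel convert at a higher rate.

```python
# A minimal sketch, not a production model. The channel ("webinar"),
# the counts, and the flat Beta(1, 1) prior are all illustrative.
import random

random.seed(0)

touched_conv, touched_leads = 48, 400      # leads that saw the webinar
untouched_conv, untouched_leads = 30, 400  # leads that didn't

def posterior_sample(conversions, leads):
    # Beta(1, 1) prior updated with observed successes/failures.
    return random.betavariate(1 + conversions, 1 + leads - conversions)

draws = 100_000
p_higher = sum(
    posterior_sample(touched_conv, touched_leads)
    > posterior_sample(untouched_conv, untouched_leads)
    for _ in range(draws)
) / draws

print(f"P(touched converts at a higher rate) ~ {p_higher:.2f}")
```

Note what this does and doesn’t say. A high probability here only says the two groups differ. It doesn’t say the webinar caused the difference.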
Most attribution models never get past basic association (correlation).
As Jacqueline Dooley explains in this MarTech article, rule-based methods don’t reflect how buying actually works. They measure what happened, not why it happened.
In other words, most GTM teams are still stuck in Level 1 of Judea Pearl’s causal ladder: association.
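Here’s a toy simulation of why Level 1 is a trap. In it, the ad has zero causal effect by construction, yet ad exposure and buying still correlate, because a hidden “in-market” flag drives both. Every probability below is invented for the sketch.

```python
# Toy data: the ad has ZERO causal effect here by construction,
# yet exposure and buying still correlate. Every probability
# below is invented for illustration.
import random

random.seed(0)
rows = []
for _ in range(10_000):
    in_market = random.random() < 0.2                        # hidden confounder
    saw_ad = random.random() < (0.7 if in_market else 0.2)   # targeting finds demand
    bought = random.random() < (0.3 if in_market else 0.02)  # demand drives purchase
    rows.append((saw_ad, bought))

def buy_rate(exposed):
    group = [bought for saw, bought in rows if saw == exposed]
    return sum(group) / len(group)

print(f"buy rate given ad exposure: {buy_rate(True):.3f}")
print(f"buy rate without exposure:  {buy_rate(False):.3f}")
# The gap looks like ad impact; by construction, it's pure selection.
```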
Bayesian models help you estimate whether something played a role. Not how much credit to assign.
That’s why they help measure things like brand recall, ad decay, and media saturation. They estimate influence, but they don’t explain the cause.
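For reference, here’s how two of those effects are commonly parameterized in media mix models: geometric adstock for ad decay, and a simple half-saturation curve for diminishing returns. The decay and half-saturation values below are illustrative assumptions, not estimates from data.

```python
# Common media-mix-model transformations, with made-up parameters.

def adstock(spend, decay=0.5):
    """Geometric adstock: carry a fraction of last period's effect forward."""
    carried, out = 0.0, []
    for s in spend:
        carried = s + decay * carried
        out.append(carried)
    return out

def saturate(x, half_sat=100.0):
    """Diminishing returns: response flattens as effective spend grows."""
    return x / (x + half_sat)

weekly_spend = [50, 80, 0, 0, 120, 60]
effective = [saturate(a) for a in adstock(weekly_spend)]
print([round(e, 2) for e in effective])
```

Estimating parameters like these is where the Bayesian machinery earns its keep. Explaining what caused the response is still a separate question.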
Mark isn’t the only one pushing for clarity here.
Quentin Gallea wrote an excellent article on Medium that details how machine learning models are built to predict outcomes, not explain them. They’re correlation engines. And when teams mistake those outputs for causal insight, bad decisions follow.
If your model only shows what happened under existing conditions, it can’t tell you what would’ve happened if something changed. That’s the whole point of causal reasoning.
Causal AI tools like Proof Analytics help teams run “what if” scenarios at scale, combining machine learning to handle messy data with causal logic to explain what actually makes an impact.
Causal modeling shows what might have happened if you had changed something, like timing, budget, or message.
That’s the question your CFO is already asking.
As Mark pointed out, Bayesian models can’t answer that. Unless you impose a causal structure, they just update likelihoods based on what already occurred.
If you’re only predicting what’s likely under existing conditions, you’re stuck in correlation.
As mentioned, GTM dashboards show you what happened, like clicks. They don’t tell you what contributed to those clicks and why.
Bayesian models help you spot patterns.
That’s useful. But it’s not enough.
Why? Because even though Bayesian models are probabilistic, they don’t model counterfactuals unless a causal structure is added. They estimate likelihoods, not outcomes under different conditions.
If you want to know whether something made a difference (or what would’ve happened if you did it differently), you need a model that can test it.
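Here’s what “a model that can test it” looks like in miniature: a toy structural causal model where spend drives awareness, awareness drives pipeline, and seasonality confounds both spend and pipeline. The structure and every coefficient are assumptions made up for this sketch; a real causal model has to earn them.

```python
# A toy structural causal model: spend -> awareness -> pipeline,
# with seasonality confounding both spend and pipeline. The structure
# and every coefficient are assumptions for this sketch.
import random

random.seed(1)

def simulate(spend_override=None, n=5_000):
    total = 0.0
    for _ in range(n):
        season = random.gauss(0, 1)                   # confounder
        spend = 100 + 20 * season + random.gauss(0, 5)
        if spend_override is not None:
            spend = spend_override                    # the do() intervention
        awareness = 0.4 * spend + random.gauss(0, 10)
        pipeline = 2.0 * awareness + 15 * season + random.gauss(0, 20)
        total += pipeline
    return total / n

observed = simulate()                          # the world as it ran
what_if = simulate(spend_override=150)         # "what if spend had been 150?"
print(f"avg pipeline, observed:      {observed:.0f}")
print(f"avg pipeline, do(spend=150): {what_if:.0f}")
```

The key move is the override: intervening on spend breaks its usual dependence on seasonality, so the difference between the two runs reflects cause, not correlation. Notice that no new data was needed, only structure.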
So instead of more data, focus on the data you already have.
By the way, if you’ve never looked at buyer behavior through a statistical lens, Dale Harrison’s 10-part LinkedIn series on the NBD-Dirichlet model is worth bookmarking. It will help you understand how buyers typically behave in a category.
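If you want a feel for what that series covers, here’s a stdlib-only sketch of the NBD half of the model: each buyer gets a long-run buying rate drawn from a gamma distribution, and their purchases in a period are Poisson around that rate. The shape and scale values are illustrative assumptions.

```python
# Sketch of the NBD half of NBD-Dirichlet: gamma-distributed buying
# rates, Poisson purchases around each rate. Shape/scale values are
# illustrative assumptions, not category estimates.
import math
import random
from collections import Counter

random.seed(2)

def poisson(lam):
    # Knuth's method; fine for the small rates used here.
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

buyers = 10_000
rates = [random.gammavariate(0.3, 1.5) for _ in range(buyers)]  # heterogeneity
purchases = [poisson(r) for r in rates]

dist = Counter(purchases)
for k in range(5):
    print(f"{k} purchases: {dist[k] / buyers:.1%} of buyers")
# Typical NBD shape: most buyers buy rarely or never; a small
# heavy-buying tail accounts for the rest.
```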
Rule-based attribution like first/last-touch only tracks what happened. It doesn’t explain what mattered.
Bayesian modeling gets you closer by helping you see patterns. But it doesn’t explain cause.
Causal models let you test what could make an impact, what may not, and why.
And as Mark Stouse pointed out, this only works if you’re using a proper causal framework. Bayesian models can’t tell you what caused something unless that structure is built in.
Cheers!
This article is AC-A and published on LinkedIn. Join the conversation!