Blog

Achim’s Razor

Positioning, Messaging, and Branding for B2B tech companies. Keep it simple. Keep it real.

Strategy

Causal CMO #5: How to Prove GTM Value in the Boardroom

Causal AI is redefining how boards evaluate GTM value. Learn how CMOs can prove impact, defend spend, and lead with data under rising fiduciary pressure.
July 22, 2025 | 5 min read

Marketing teams that obsess over MTA, MQLs, and CTR/CPL are under more pressure than ever. BS detectors are on high alert in the boardroom. In Part 5 of The Causal CMO, Mark Stouse outlines what the C-suite already expects: GTM leaders who are fluent in proof, spend, and business acumen.

Takeaways

  • Fiduciary duty now applies to all officers, not just CEOs
  • Boards want to see real business impact, not activity metrics
  • GTM teams need to learn finance, speak cross-functionally, and test quietly
  • CAC and LTV are often fiction
  • The real blocker to change is courage

Proof Is Now Required 

As Mark and I already discussed, the rules changed in 2023.

The Delaware Chancery Court’s 2023 ruling expanded fiduciary duty from CEOs and boards to all corporate officers, including CMOs, CROs, CDAOs, and other GTM leaders.

That means we’re now individually accountable for risk oversight and due diligence. Not just our intent. Our judgment too.

“This is changing the definition of the way business decisions are evaluated… What did you do to test that? What did you do to identify risk and remediate the risk?”

Boardroom expectations have shifted. They want marketing accountability, not activity metrics. If your GTM budget is still defended with correlation math, you’re going to lose the room.

Causal AI gives you something different: Proof. 

It tests what causes performance and why, quantifies how much each driver contributed, and shows what to do next.

It operates a lot like a GPS, recalculating your position in real time, suggesting alternate routes, and showing what could happen under different conditions.

What Boards Actually Want

Boards don’t want us to show them more dashboards. Think of it like the bridge of a ship: the Captain and First Officer don’t need more instrument readouts.

They want decision clarity:

  • What happened?
  • Why did it happen?
  • What should we do next?
  • What are our options?

Causal AI models cause and effect based on live conditions, not lagging indicators. It runs continuously. It adjusts to change. It simulates outcomes with real or synthetic data.

“GPS says, ‘I know where you are.’ You say where you want to go. It gives you a route. Then, if something changes—an accident, traffic—it reroutes. Causal AI works the same way.”

Mark shared a great story from one of his clients. During COVID, the finance team at Johnson Controls planned to cut marketing by 40%. But causal modeling showed how that would destroy revenue 1, 2, and 3 years out.

“The negative effects… were terrible. Awful. Like, profoundly wretched.”

Finance still made cuts, but only by 15%, not 40%. Because the data made the risk real.

Most CAC Models Are Fiction

A lot of B2B companies still treat CAC and LTV as truth. Mark didn’t mince words:

“CAC is a pro rata of some larger number. That pro rata is not real.”

And LTV?

“In the vast majority of cases, it’s completely made up.”

The bigger issue: CAC isn’t just a cost. It’s a form of debt. If you spend $250K chasing an RFP and don’t keep the client long enough to pay that back, you’re in the red. Period.
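
To make the debt framing concrete, here's a minimal payback sketch. The monthly revenue and margin figures are hypothetical; only the $250K comes from Mark's example:

```python
# Minimal sketch: CAC as debt, paid down by monthly gross margin.
# Revenue and margin are hypothetical illustrations, not figures from the episode.

cac = 250_000             # spend chasing the RFP
monthly_revenue = 40_000  # what the account bills per month (assumed)
gross_margin = 0.30       # margin available to pay back acquisition cost (assumed)

monthly_payback = monthly_revenue * gross_margin
months_to_break_even = cac / monthly_payback

print(f"Break-even after {months_to_break_even:.1f} months")
# ~20.8 months: lose the client before then and the deal never paid for itself.
```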

This mindset shift matters most for CMOs trying to earn budget.

“You’ve got to understand unit economics of improvement… how much money it takes to drive real causality in the market. That’s true CAC. Not the BS a lot of teams have been selling.”

What does that look like?

The chart below is a simulated example of a typical flat MTA pro rata model compared to a variable causal model.

Bar chart comparing fake pro-rata CAC vs. true causal CAC across three fictional B2B customers.
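
If you want to see the mechanics behind a chart like that, here's a minimal sketch comparing the two allocation approaches. The account names, spend, and causal shares are all made up; in practice the shares would come from a causal model, not be typed in by hand:

```python
# Hypothetical illustration: flat pro-rata CAC vs. causally weighted CAC.
total_gtm_spend = 900_000
customers = ["Acme", "Globex", "Initech"]        # fictional accounts

# Pro-rata: spread spend evenly, regardless of what actually drove each win.
pro_rata_cac = {c: total_gtm_spend / len(customers) for c in customers}

# Causal: allocate spend by each account's modeled causal contribution share
# (these shares are assumptions; a causal model would estimate them).
causal_share = {"Acme": 0.15, "Globex": 0.60, "Initech": 0.25}
causal_cac = {c: total_gtm_spend * s for c, s in causal_share.items()}

for c in customers:
    print(f"{c}: pro-rata ${pro_rata_cac[c]:,.0f} vs. causal ${causal_cac[c]:,.0f}")
```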

To be fair, it’s not always deception. It’s often desperation. Most teams are never given the tools to calculate real causality.

Marketing Must Step Up

CMOs say they want a seat at the table. But most still operate like support teams.

If you want credibility in the boardroom, act like it’s your business and your money.

“I became very good at interpreting marketing into the language of whoever I was talking to, like HR, Legal, Finance, the CEO. No marketing jargon. Just business terms.”

Boards fund systems that scale. That means reframing GTM as a system, not a series of tactics. It’s a mindset that requires critical thinking and letting go of outdated playbooks. 

“This is the difference between being seen as a business contributor and being a craft shop.”

Start learning finance. Take courses. Do sales training. Train your team. Speak the language of the business. That’s how you earn respect and influence decisions.

Build Proof. Then Tell the Story.

Part 4 covered how to start a skunkworks project:

  • small budget
  • small team
  • no fanfare

In Part 5, Mark explains why the silence matters.

“You want to assemble your story of change. You won’t have that if you declare you’re doing this up front.”

The goal here is to earn sequential trust over time, not to be secretive. When people feel the improvement first, they’re far more likely to believe the explanation later.

“If they already believe it, they’ll accept the facts. If they hear the facts first, they’ll resist.”

So don’t lead with a deck. Don’t sell a vision. Build causal models behind the scenes. Learn what’s working. Adjust what’s not. Let the results speak. The key is to keep learning. 

Then, when the timing’s right, you can confidently walk into the boardroom with a better story and the data to back it up.

The Real Bottleneck Is Courage

The hardest part of this shift isn’t modeling. It’s having the guts to do the right thing instead of always doing things right.

“The biggest issue we all face is courage. The courage to act.”

Too many marketing leaders stay stuck because it’s safer. Even if nothing changes.

“If you’ve tried the old approach your whole career and it hasn’t worked… then you’ve got to change.”

And according to Mark, to be that change, you have to stop waiting for permission, stop hiding behind bad math, and start proving your worth quietly, confidently, and causally.

Final Thoughts

Navigating a business is kind of like flying a plane. Causal AI gives GTM teams the instruments they need to fly safely through volatility. It does more than “keep you in the air.” It helps you choose better paths when visibility disappears.

Implementing Causal AI in GTM requires a mindset shift. Marketing leaders will need to let go of legacy systems like MTA, and the change is coming fast.

Here’s where to begin:

  1. Learn the language of finance
  2. Translate GTM into business outcomes
  3. Start a Skunk Works project and prove it works

If you want a seat at the table, you need to earn it and prove it. 

Missed the LinkedIn Live session? Rewatch Part 5.

If you like this content, here are some more ways I can help:

  • Follow me on LinkedIn for bite-sized tips and freebies throughout the week.
  • Work with me. Schedule a call to see if we’re a fit. No obligation. No pressure.
  • Subscribe for ongoing insights and strategies (enter your email below).

Cheers!

This article is AC-A and published on LinkedIn. Join the conversation!

Execution

Causal CMO #4: How to Operationalize Causal AI in GTM

Learn how to operationalize Causal AI across your GTM strategy. Shift from attribution to effectiveness, and build decision clarity before 2028.
July 15, 2025 | 5 min read

A lot of GTM teams are overloaded. New tech. New tools. New hype. All promising transformation, but rarely delivering clarity. In this recap of Part 4 of The Causal CMO, Mark Stouse explains why operationalizing Causal AI isn’t just about buying another tool. It first requires a mindset shift. A hard reset on how GTM teams define risk, read signals, and move forward.

Takeaways

  • Causal AI won’t become self-serve until teams are ready to use it
  • Dashboards tell you what happened; Causal AI tells you why it happened
  • Externalities like time lag and volatility are rarely accounted for in GTM
  • You can’t get to efficiency until you prove effectiveness
  • Counterfactuals are the gateway to clarity without the data fight

Mindset Shift Before Metrics

Before you operationalize anything, you need to think differently. 

The first step isn’t modeling or tooling. It’s dropping the need to be right. 

Causal AI is only effective if you’re willing to look at what’s actually happening, not what you hoped would happen. 

So do you want to be right? Or do you want to be effective?

This is a key distinction, especially for Go-To-Market teams. Instead of constantly trying to prove they’re right, they should ask what they need to do next.

That shift is already starting to happen.

Proof Analytics, for example, no longer looks like a traditional SaaS product. Most of Proof’s clients now rely on software-enabled services because self-serve just doesn’t work when teams are overwhelmed. 

It’s not a tech problem. It’s a saturation problem.

“Teams today are saturated like ground that’s been rained on for too long. They can’t absorb anything new. The water just runs off.”

Too many GTM teams are stuck on this treadmill. They’re still chasing efficiency because it’s easier to cut cost than drive growth. But efficiency without proven effectiveness is meaningless.

And that’s where Causal AI comes in.

The Problem With Attribution Isn’t Just the Model

As we discussed in Part 1, multi-touch attribution (MTA) assumes linearity and doesn’t account for time lag. It only focuses on the dots, not the lines in between.

Dashboards typically treat data like a mirror. But as Mark pointed out, data only reflects the past, and past is not prologue. Like crude oil, it’s useless until refined.

“There is no intrinsic value in data. Only in what it gets refined into.”

Mark shared a story from a meeting where a CIO showed how easy it was to manipulate attribution weights. Then he had various leaders at the table do the same. Same data, four outcomes. Each one reflected a different bias.

Guess who had the least credibility?

Yup. Marketing. The CIO said to them:

“Of everyone in the room, you arguably have the most bias. Your outcome is dead last in terms of credibility.”

Causal AI makes the system much harder to game. It tests patterns for causal relevance and recalibrates in real time. If the forecast starts to degrade, it tells you what to do next.

It doesn’t care if the news is good or bad. It just tells the (inconvenient) truth. 

Market Conditions Come First

Before the model even begins, Mark’s team maps the external environment. They model the headwinds and tailwinds first. Only then do they plug in what the company is doing.

This is where most teams fall short.

Too often GTM is treated like an isolated system. But it’s not. It’s subject to risk, time lag, and external forces that marketers rarely model. And it shows.

According to Mark, the effectiveness of B2B GTM spend has dropped from 75% to just above 50% since 2018. That’s not a tactics problem. It’s a market awareness problem.

“The average B2B team is frozen in their perspective. They’re not thinking about the externalities unless it gets so bad they can’t ignore it.”

The result? Poor decisions, reactive guidance, missed opportunities. And an inability to plan for value because no one knows where in the calendar to look for it.

Counterfactuals Make It Real

One of the most powerful features of Causal AI is the ability to model counterfactuals. What would happen if we made a different decision?

Until recently, this required expensive synthetic data. Now, GenAI tools make it accessible. With a detailed enough scenario prompt, teams can simulate outcomes, measure impact, and prioritize programs before spending a dime.

It’s like an A/B test for strategy. No need to touch real data. No risk of tripping legal wires. Just clarity.

“Most stealth efforts start here. The counterfactual model shows what’s probably happening. Then you go get the real data to prove it.”

It’s also the easiest way to build internal buy-in. Teams can explore alternatives without asking other departments for access or permission.
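
As a rough illustration (not Proof Analytics' actual model), here's what a toy counterfactual run can look like: two spend scenarios pushed through an assumed response curve with lag and diminishing returns. Every coefficient below is made up; a real causal model would estimate them from data:

```python
# Toy counterfactual: compare two GTM spend scenarios against a made-up
# response curve. The coefficients, lag, and diminishing-returns shape are
# assumptions for illustration only.
import math

def simulated_pipeline(spend_by_quarter, lag_quarters=1, baseline=2.0):
    """Return pipeline ($M) per quarter given spend ($M), with lag and
    diminishing returns."""
    out = []
    for q in range(len(spend_by_quarter)):
        lagged = spend_by_quarter[q - lag_quarters] if q >= lag_quarters else 0.0
        lift = 1.8 * math.log1p(lagged)          # diminishing returns
        out.append(baseline + lift)
    return out

keep_spend = [1.0, 1.0, 1.0, 1.0]                # status quo
cut_spend  = [1.0, 0.6, 0.6, 0.6]                # counterfactual: spend cut

for name, scenario in [("keep", keep_spend), ("cut", cut_spend)]:
    total = sum(simulated_pipeline(scenario))
    print(f"{name}: {total:.1f}M pipeline over 4 quarters")
```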

Start With Skunk Works

You don’t need a top-down mandate to operationalize Causal AI. In fact, Mark recommends the opposite.

Start small. Keep it quiet. Don’t even call it a transformation.

“Carve off a small budget and a couple of people. Model and learn for 9 to 12 months. Don’t say anything. Just execute.”

As the team starts learning and adjusting, people will notice. They’ll feel the shift before they understand it. Then, when the time is right, you explain how it happened.

“If people feel the improvement first, they’ll accept the facts. If they hear the facts first, they’ll resist.”

There’s no manipulation in this. It’s psychology. Let results speak before you tell the story. It’s like asking for forgiveness instead of permission. 

Causal AI Is Not Departmental

Causal AI is not a marketing or revenue tool. It’s a business system. 

Mark says the best clients are already thinking in terms of enterprise models. Finance teams often lead the adoption. They use causal modeling transparently across departments to see what’s working for the business as a whole.

And lead gen is not the goal. The board doesn’t care about leads. They care about cash flow, growth, and risk. In other words, bigger deals, faster deals, and more of them. Causal AI connects those dots and the lines in between.

“You can’t get to efficiency unless you know if it’s effective.”

Final Thoughts

Effectiveness is not a tactic. It’s a lens.

Causal AI doesn’t ask if you were right. It helps you get better at seeing possibilities and becoming more effective. 

We’ll explore that further in Part 5 as we dig into investment decisions and boardroom conversations.

Until then, ask yourself:

Are you willing to be wrong long enough to get it right?

Missed the LinkedIn Live session? Rewatch Part 4.

If you like this content, here are some more ways I can help:

  • Follow me on LinkedIn for bite-sized tips and freebies throughout the week.
  • Work with me. Schedule a call to see if we’re a fit. No obligation. No pressure.
  • Subscribe for ongoing insights and strategies (enter your email below).

Cheers!

This article is AC-A and published on LinkedIn. Join the conversation!

Insight

Why AI Adoption Still Frustrates Most Managers (And What Helps)

Tried automating with AI? We did too. Here’s what broke, what worked, and why most tools still miss the mark for real-world business teams.
July 8, 2025 | 5 min read

By Gerard Pietrykiewicz and Achim Klor

Achim is a fractional CMO helping various B2B GTM teams with AI adoption. Gerard is a seasoned project manager and executive coach helping teams deliver software that actually works.

Most AI automation tools look easy in demo videos. But when we tried building a simple system to summarize calls and send reports, reality hit hard: clunky UIs, unexpected limitations, and lots of wasted time. Still, when used right (especially for early prototyping), AI can be a breakthrough in team alignment. This article shares what worked, what didn’t, and where we go from here.

Takeaways

  • AI agents today are still scripts, not true automation. The promise of autonomous workflows is overstated.
  • Tools like make.com and ChatGPT require technical fluency. Setup is far more complex than the marketing suggests.
  • Security and data access are non-trivial blockers. Granting full email or Drive access raises concerns many don’t consider.
  • AI is great for prototyping and alignment. “Vibe-coding” can quickly resolve communication gaps.
  • The next frontier is usability. AI won’t go mainstream until the interfaces catch up to user needs.

AI Hype vs. Reality

So, we recently jumped into the whole AI automation thing. The goal was simple: use make.com to build something that would summarize weekly Google Meet calls and email a neat report. 

Easy, right? 

All those flashy Instagram and LinkedIn videos had led us to believe it would be. We'd worked with Zapier before, so surely this would be a similarly straightforward experience, right?

Not so fast.

First hurdle: Google Meet doesn’t just hand over transcripts with an API. Nope. They’re stuck in Google Docs in Drive. You have to give make.com access to a specific folder. Then came a “simple” filter for recent documents. Simple, unless you don’t know the exact code. The “intuitive” interface felt more like a maze when what was sorely needed was real control.

Then, to send the summary via Gmail, you have to link your entire account to make.com. That can make anyone uneasy, to say the least. Finally, setting up ChatGPT with API keys and managing credits wasn’t hard on its own, but put it all together, and it became a bigger headache than expected.

The make.com AI assistant, supposedly there to help, burned through free credits like kindling while trying to resolve a basic filter issue. The frustration wasn’t with the idea; it was with how hard it was to “make” it work. After an hour wrestling with the interface, it was clear that our time was better spent elsewhere.

AI Automation Workflow Breakdown: the intended automation setup

AI automation workflow showing steps from Google Meet to Gmail, with callouts for API limitations, make.com setup hurdles, and privacy concerns.
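
For anyone curious what we were actually trying to wire up, here's the same pipeline sketched in plain Python rather than make.com. It assumes the Meet transcript has already been exported to a local text file (sidestepping the Drive access step that tripped us up); the model name, addresses, file path, and app-password setup are placeholders:

```python
# Sketch of the intended workflow: transcript -> summary -> email.
# Assumes the Meet transcript was exported to a local file; paths, model name,
# and addresses are placeholders, and credentials come from environment variables.
import os
import smtplib
from email.message import EmailMessage

from openai import OpenAI

transcript = open("weekly_call_transcript.txt", encoding="utf-8").read()

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "Summarize this meeting transcript as a short report with action items."},
        {"role": "user", "content": transcript},
    ],
)
summary = response.choices[0].message.content

msg = EmailMessage()
msg["Subject"] = "Weekly call summary"
msg["From"] = "me@example.com"
msg["To"] = "team@example.com"
msg.set_content(summary)

with smtplib.SMTP_SSL("smtp.gmail.com", 465) as smtp:
    smtp.login("me@example.com", os.environ["GMAIL_APP_PASSWORD"])
    smtp.send_message(msg)
```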

Stephen Klein, CEO of Curiouser.AI, hits the nail on the head in this LinkedIn post. He argues that most of the “agentic AI” buzz is just that. Buzz. 

Today’s AI “agents” are often just scripts, not independent thinkers. We’re years away from true autonomous AI. 

Klein is right. Businesses risk chasing inflated promises, throwing money at “Hype-as-a-Service” instead of real solutions.

A Glimmer of Hope

Despite the roadblocks, there is reason to be optimistic. A recent “vibe-coding” experiment (quickly mocking up a concept using AI tools without overengineering) is a good example. 

If you're a non-technical manager leading a software team, you can use it to quickly build a prototype of a basic idea. We tossed the code, sure, but it completely changed how the team communicated. It cut down on all the detailed upfront planning we usually do. 

Could we build a full, production-ready solution with vibe-coding today? Probably not. But the immediate wins (clear talks, faster decisions, smoother development) were huge.

One time, we were stuck on a feature. Everyone had a different idea of what “simple” meant. We spent hours in meetings, just talking in circles. With vibe-coding, I cobbled together a rough version in an hour. We put it on the screen, and suddenly, everyone saw the same thing. The room went from confused murmurs to “Oh, I get it!” in seconds. It was a game-changer for clarity.
 
Gerard Pietrykiewicz

Before Vibe-Coding → After Vibe-Coding

  • Confusion over what “simple” meant → Clear, shared understanding
  • Long meetings, vague ideas → Fast alignment via prototype
  • No one on the same page → Instant “Oh, I get it” moment

This experience shows one clear truth: AI tools are harder to use than the marketing videos suggest. And no, we’re not ready to fire all our developers. But those who stick with it, who push past the early bumps and use these tools wisely, will find a real edge. 

New tech always has growing pains. Think about how easy GenAI has made things. It went from complex APIs to something almost anyone could use overnight.

Moving Forward with Clarity, Not Chaos

Yes, Stephen Klein is right to warn us about blindly following the hype. But his warnings shouldn’t stop us from trying things out. They should guide us to explore with care and common sense. 

As leaders, our challenge is to bridge limits by pushing for simpler, more intuitive solutions. Maybe AI itself should design user interfaces that actually make sense for managers, not just developers.

It reminds me of the early days of the internet. Back in the 1980s, it was powerful, but only for those who understood complex commands. Then along came the web browser (anyone remember Netscape?), a simple interface that opened the World Wide Web to everyone. AI needs its browser moment.
 
Achim Klor

Final Thoughts

Like any new tech, AI tools will continue to trip us up. But every experiment makes us better. The more we test, the more likely we are to build the future we want, not just buy into the one being sold.

If you like this co-authored content, here are some more ways we can help:

Cheers!

This article is AC-A and published on LinkedIn. Join the conversation!

Insight

Causal CMO #3: Causal AI is the GPS for Your GTM, Get Ready for 2028

Causal AI isn’t just tech. It shows what really drives GTM results. Learn how to move beyond attribution and start building your causal advantage before 2028.
June 25, 2025 | 5 min read

Many of us are still in the early stages of AI adoption, experimenting, testing, and trying to make sense of how it fits. But the pressure to move beyond pattern-matching is building. In Part 3 of The Causal CMO, Mark Stouse explains that Causal AI isn’t just a tech upgrade—it’s a new layer of accountability. It recalibrates forecasts, reveals the impact of GTM decisions, and removes the guesswork from budget conversations. This article outlines what GTM teams need to prepare for as Causal AI becomes mainstream. 

Takeaways

  • Causal AI shows you what actually caused your business outcomes.
  • CEOs already know that data from attribution models can be gamed.
  • You don’t need perfect data to get started.
  • AI doesn’t kill creativity or purpose. It makes both stronger.
  • By 2028, causal tools will make it easier to call your bluff.

Learn What to Do Next

One of Mark’s biggest points was to embrace being wrong. Be effective instead of being right. If GTM teams want to get ahead, they need to let go of trying to control everything. 

We’re already seeing this pressure hit big consulting firms. Demand for their AI services is off the charts, but clients aren’t paying what they used to. It’s forcing layoffs because staff aren’t fluent in AI. This is what the 2028 reckoning looks like in real time: not a tech crisis, but a credibility one. 

Causal AI doesn’t care about any of this. It calls things as they are. It adjusts automatically based on what’s going on around you. 

Mark calls it a GPS for your business. 

“If things start to degrade in the forecast, it tells you what to do to get back on track. That’s why we modeled Proof Analytics after GPS.” 

Unlike forecasting the weather, which looks at past patterns, Causal AI tools like Proof Analytics measure cause and effect in real time, based on current conditions and the actual levers you can pull. 

Proof Analytics user interface showing a real-time scenario analysis of online sales forecasts across multiple inputs and external factors.
Proof Analytics showing four potential causal what-if scenarios

Not All AI Is the Same

Mark outlined four categories of AI: generative, analytical, causal, and autonomous.

The leap from correlation to causality is the break point. GTM teams stuck in attribution are falling behind. Those preparing for Causal AI will be ready when it becomes standard.

“We have about three years to cross the river. If you don’t, it’s going to be very hard after 2028.”

Unlike attribution modeling, which relies on correlation and weighting, Causal AI directly isolates impact.

The Myth of Control

Causal AI forces a choice: keep pretending we’re in control, or start navigating with truth.

Mark compared what we control across different grading scales as illustrated below.

Perceived Control Across Different Grading Scales

Bar chart comparing average control across academic, baseball, and surfing performance scales. Academic shows 90% in control; baseball, 35%; surfing, 6%.

In each grading scale, what counts as success depends on how much is actually in our control.

In school, we control most of our grade because it’s based on the work we hand in. In baseball, a .350 hitter fails to get a hit two out of three times but still makes the Hall of Fame. In surfing, a world-class pro may wipe out 94% of the time. In war or pandemics, almost nothing is in your control.

“Business is more like baseball. If we start grading it that way, we end up with a lot more truth.”

Same goes for marketing. As much as 70% of GTM performance is driven by external factors.

“If you don’t know what the currents are, you won’t know how to steer the ship.”

That’s why your job isn’t to be perfect. It’s to reroute. 

Causal AI detects shifts and tells you what to do. It zooms from big picture to ground level, depending on what the decision calls for.

Progress. Not perfection.

Tech Doesn’t Kill Purpose. It Reveals It.

There’s a quiet fear around AI, especially in creative and strategic roles. The idea that if a machine can see something you can’t, your work might not matter. That’s just not true. 

If the tools are properly learned and configured, they amplify creativity. 

Take GenAI, for example. If you dive into ChatGPT without training it on facts or setting expectations, it’s like hiring an intern and never giving them a clear job description. 

The problem isn’t the tech.

“If you pick up the tool, you have purpose. That’s not loss. That’s awareness.”

Mark also shared how his son, a private chef, uses GenAI in the background while cooking. It helps plan menus, tailor preferences, and provide real-time input. It doesn’t replace his job. It makes him better at it.

It’s like that for marketers too. 

“Marketing is a multiplier. It doesn’t need shared revenue credit. It needs causal proof of lift.”

Since before the first Industrial Revolution, tech has been a multiplier. It has expanded human capability, freeing people up to focus on innovating and creating. If you’re still defending multi-touch attribution (MTA), it’s not your data that’s outdated. It’s your mindset.

What Happens When AI Calls Your Bluff?

You can’t hide behind attribution dashboards anymore.

During interviews with Fortune 2000 firms, Mark uncovered a common thread: C-suites aren’t frustrated by skill gaps. They’re frustrated by teams who can’t explain impact.

“They come up with total BS programs to justify spend. Do they not know that we know this is stupid?”
 
Fortune 2000 CEO

Attribution models are weighted and easy to manipulate. Change the weights, change the story. Everyone knows this. That's why MTA charts get ignored in the boardroom.

Causal models run continuously. They adjust to change. They reroute you when conditions shift. Causal AI works like a GPS. It gives teams the heads-up they need to adapt.

The Treadmill Is Breaking

A lot of GTM teams are stuck on a productivity treadmill. Budgets are cut. Expectations stay high. Nobody knows what’s actually working.

AI will expose that. Early on, it will cut 30-40% of activities. Not because it’s ruthless, but because those activities weren't creating impact in the first place.

“We’ve just always been doing it. With AI, everybody will know.”

In other words, if you’re not using AI with a causal lens, you’re optimizing noise.

Final Thoughts

My conversation with Mark made a few things very clear:

  • AI isn’t replacing you. But if you ignore it, someone else will outperform you.
  • Attribution logic can’t handle lag, volatility, or context. Causal AI can.
  • GTM teams have until 2028 to make the shift or risk falling behind.

In Part 4, we’ll talk about what it looks like to operationalize this shift.

Stay tuned.

Missed the LinkedIn Live session? Rewatch Part 3.

If you like this content, here are some more ways I can help:

  • Follow me on LinkedIn for bite-sized tips and freebies throughout the week.
  • Work with me. Schedule a call to see if we’re a fit. No obligation. No pressure.
  • Subscribe for ongoing insights and strategies (enter your email below).

Cheers!

This article is AC-A and published on LinkedIn. Join the conversation!

Insight

Causal CMO #2: How to Move From Patterns to Proof

Pattern-based models like attribution help predict behavior, but they don’t explain cause. This article shows how causal AI reframes GTM, risk, and performance.
June 18, 2025 | 5 min read

Models like Bayesian and NBD-Dirichlet are powerful ways to predict human behavior. But they don’t explain why things happen. They can spot patterns. They don’t prove cause. In this recap of Part 2 of The Causal CMO, Mark Stouse explains the shift from pattern-based models to causal inference and why that shift matters now more than ever for GTM teams.

Takeaways

  • Bayesian models show patterns, not causes.
  • NBD-Dirichlet assumes behavior won’t change.
  • Most GTM teams optimize for efficiency, not effectiveness.
  • Marketing only multiplies sales when sales is working.
  • Attribution hides risk. Causal models expose it.

“Why” is Not a Nice-To-Have

One of the things Mark started with is an age-old question we’ve always tried to answer in business:

“Why things happen is the number one thing that the scientific method seeks to understand.”

Bayesian models were a step in that direction. But Bayes’ theorem is roughly 300 years old, and these models are predictive. They can’t tell us what caused what, or why.

What they are very good at is telling us the probability that something is happening.

“If you see smoke, a Bayesian model helps update the probability there’s a fire. But it won’t tell you what caused the smoke.”
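
Here's that smoke-and-fire update worked through with illustrative numbers (the prior and likelihoods are made up for the example):

```python
# Bayes' rule on the smoke/fire example, with illustrative numbers only.
p_fire = 0.01                 # prior: how often a fire is actually burning
p_smoke_given_fire = 0.90     # smoke is very likely when there's a fire
p_smoke_given_no_fire = 0.05  # smoke also happens without fire (BBQs, fog machines)

p_smoke = p_smoke_given_fire * p_fire + p_smoke_given_no_fire * (1 - p_fire)
p_fire_given_smoke = p_smoke_given_fire * p_fire / p_smoke

print(f"P(fire | smoke) = {p_fire_given_smoke:.2f}")  # ~0.15
# The probability of fire jumps from 1% to ~15% -- but nothing here says
# what caused the smoke.
```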

Same goes for NBD-Dirichlet. It’s great for describing past behavior. It can even predict short-term purchasing patterns.

You can see this in action on many e-commerce platforms. Amazon uses NBD-Dirichlet to model the probability of repeat purchases. If you buy a certain product, for example, you almost always see a “You May Also Like” CTA. 

This kind of modeling assumes that buyer behavior doesn’t change much over time. 
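
Here's a minimal sketch of the NBD half of that model, a gamma-Poisson mixture, with made-up parameters. Notice that each buyer's purchase rate is drawn once and then held fixed; that is exactly the "behavior doesn't change" assumption:

```python
# Minimal NBD (gamma-Poisson) sketch: purchase rates are drawn once per buyer
# and then assumed constant -- the "behavior doesn't change" premise.
# Shape/scale values are made up for illustration.
import numpy as np

rng = np.random.default_rng(42)
n_buyers = 100_000

rates = rng.gamma(shape=0.8, scale=1.5, size=n_buyers)   # heterogeneous buying rates
purchases = rng.poisson(rates)                            # purchases in one period

repeat_rate = (purchases >= 2).mean()
print(f"Share of buyers making 2+ purchases: {repeat_rate:.1%}")
```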

But human beings are irrational. We choose A today and B tomorrow, depending on how we feel. And in B2B tech, with its layers of procurement bureaucracy, stakeholders, and decision time lag… well, you see where I’m going with this. 

The Shift from Correlation to Causation

Most GTM systems in place today are designed to support correlation-based marketing, not causal decision-making.

They’re pattern matchers. Attribution tools. Regression models.

None of them explain cause and effect. They were built to scale performance efficiency, not to measure impact or cost-effectiveness.

Marketing is just as much to blame as anyone. But the rot began with deterministic thinking at the leadership level. 

“Most of the bad info came from founders and VCs. They wanted deterministic systems. The idea was simple: put a quarter in, get a lead out.”

That kind of worked for a while, when interest rates were low, uncertainty was minimal, and pipelines moved fast.

But once volatility hits (time lag, market noise, internal complexity), those patterns fall apart.

Causal inference doesn’t just show you a pattern. It tests whether that pattern causes anything meaningful to happen.

And it does this continuously, in real time.

That’s the power behind causal AI.

Flowchart showing the evolution of analytics models: Descriptive, Predictive, Correlation-Based, Causal Inference, and Causal AI, each linked to a guiding question.

Why Marketing Effectiveness Has Collapsed

One of the biggest reasons GTM performance has tanked in the last two decades is that most GTM teams still operate like the environment hasn’t changed.

“70% of the world is stuff you don’t control. And most marketers don’t even account for it.”

The effectiveness of GTM teams has dropped off a cliff, not because marketers suddenly got worse, but because the headwinds got stronger and faster.

  • Deals are taking longer.
  • Budgets are getting slashed.
  • CFOs are pulling back anything they can’t defend.

It’s a full-blown marketing effectiveness collapse, clearly visible in 2025. 

And yet, everyone still keeps trying to optimize for efficiency. Perhaps it’s because we blindly believe we are already effective. 

“You can’t be efficient until you’re effective.”

This is an important reminder: Marketing is a non-linear multiplier. Sales is a linear value creator. 

For the past 25 years, we’ve been trying to force marketing to abide by Sales’s linear outcomes. That’s no different than forcing a square peg through a round hole. 

In a causal model, you can calculate the lift marketing creates while Sales is executing.

If Sales underperforms, Marketing’s lift is zero. If Sales is kicking butt, marketing can multiply Sales efficiency by 5x and Sales effectiveness by 8x. 

That’s not for debate. It’s proven math.

Visual showing the multiplier effect of marketing on sales: 8x more effective and 5x more efficient, explaining that GTM investments are multiplicative, not additive.
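
A toy calculation shows why the multiplicative framing matters. The numbers below are illustrative only; the 5x/8x figures come from Mark's modeling, not from this snippet:

```python
# Toy illustration of additive vs. multiplicative GTM framing.
# Numbers are made up; the point is the structure, not the figures.

def revenue_additive(sales_output, marketing_contribution):
    return sales_output + marketing_contribution

def revenue_multiplicative(sales_output, marketing_lift):
    return sales_output * (1 + marketing_lift)

for sales in (0.0, 10.0):        # $M closed by sales on its own
    add  = revenue_additive(sales, 3.0)          # attribution-style: credit split
    mult = revenue_multiplicative(sales, 0.4)    # causal-style: 40% lift on sales
    print(f"sales={sales:>4} -> additive={add:.1f}M, multiplicative={mult:.1f}M")

# When sales is zero, the multiplicative model correctly shows zero lift;
# the additive model still hands marketing "credit" it never created.
```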

Sadly, that multiplier logic doesn’t show up in attribution because attribution is correlation. 

It’s only visible in causal inference.

4 Types of AI

Mark explained the four types of AI in play today. Only one explains “why.”

  1. Generative AI is what most people are experimenting with
  2. Analytical AI is correlation-based pattern matching
  3. Causal AI models cause and effect
  4. Autonomous AI is not real yet (mostly hype)

Most GTM teams are stuck between the first two. 

They’re still optimizing a traditional sales and marketing funnel with pattern-matching tools that can’t distinguish signal from noise. In other words, transactional tactics. 

Worse still, they’re getting excited about autonomous agents that “do work for you,” without asking if the work being done is even useful.

“Agentic AI without causal logic is just automation with lipstick.”

Causal AI and the CFO

One of the most overlooked shifts in GTM accountability is that causal models are increasingly being used by finance.

“FP&A and ops teams are going to be the ones evaluating GTM performance. Not marketing itself.”

This shift is already underway. It’s part of a larger response to Delaware’s expanded fiduciary rules.

“All officers now carry personal liability if shareholder risk isn’t managed responsibly.”

Which means random budget cuts, especially in marketing, are going to get harder to justify.

Causal AI gives CFOs the scalpel they’ve needed for years. It helps them decide what to cut, and more importantly, what not to cut. This is how CFOs use causal AI for GTM decisions.

Causal AI is like a CRM for cause and effect that updates continuously and informs real decisions in real time.

GPS Can’t Help If You’re Blindfolded

One of the analogies Mark uses when explaining Causal AI is that it’s like a GPS. It doesn’t promise precision. But it helps you avoid collisions and reroute when the road changes.

“The route that worked last quarter might not work today. The conditions changed. The terrain shifted. And if you’re not paying attention, you’re going to hit something.”

And it’s not just rerouting. 

These systems simulate what’s likely to happen next based on shifting inputs, so you can course-correct before problems hit.

So if you’re running models designed for deterministic systems, you’re essentially driving blind.

Final Thoughts

The hype around AI isn’t going away. Neither is the pressure to “cut costs,” especially in GTM.

A lot of budget cuts these days are based on correlation, on what appears to be contributing. But contribution isn’t the same as causation. Just because a channel shows up in the report doesn’t mean it drove the outcome.

And if you slash the wrong input, you could kill something that was actually working. That’s the long-term damage most teams don’t see coming (or keep missing). 

“AI is going to become a lie detector. It will show where the correlation falls apart.”

That’s the shift GTM leaders need to prepare for.

Fast.

Missed Part 2? Rewatch it on LinkedIn.

If you like this content, here are some more ways I can help:

  • Follow me on LinkedIn for bite-sized tips and freebies throughout the week.
  • Work with me. Schedule a call to see if we’re a fit. No obligation. No pressure.
  • Subscribe for ongoing insights and strategies (enter your email below).

Cheers!

This article is AC-A and published on LinkedIn. Join the conversation!

Insight

Causal CMO #1: Attribution still dominates GTM. That’s a problem.

GTM teams still rely on attribution. That’s a problem. Learn how causality models reduce risk, reveal drivers, and rebuild CMO credibility.
June 10, 2025 | 5 min read

Many GTM teams continue to rely on correlation to justify decisions. It’s an ongoing problem. In this kickoff to our Causal CMO Series, Mark Stouse and I cover why marketing attribution continues to fail, how internal data keeps getting “engineered”, and how new legal rulings will put GTM leaders directly in the line of fire.

Takeaways

  • Correlation can’t explain real outcomes. Causality can.
  • Internal GTM data is often manipulated under pressure. Not malicious, just human.
  • Delaware rulings make all officers, including CMOs, accountable for data quality.
  • Attribution logic like first, last, and multi-touch is correlation, not cause.
  • Causal models start from business outcomes and force teams to reverse-engineer what actually moved the needle.

Welcome to the Causal CMO

Mark Stouse and I kicked off the first session with a direct conversation about how marketing attribution models still hold GTM teams back from reporting real-world buyer behavior. Too many are still stuck in correlation, and it’s costing them. 

Correlation feels safe. It’s anything but.

Marketers look for patterns. It’s what we've been trained to do. But in complex, time-lagged buying cycles like in B2B tech, correlation doesn’t tell you anything useful, like what actually caused the deal to close (or not).

Tom Fishburne's cartoon showing a business meeting where a team misinterprets correlation between sales and shaved heads, humorously illustrating flawed logic in marketing data analysis.
Source: Marketoonist, by Tom Fishburne

Mark made a great point: correlation is binary. It either exists or it doesn’t. That makes it easy for humans to understand, which is why we gravitate toward it. But it’s not how markets work. It’s not how buyers behave. It’s not what happens in the real world. 

Causality tells you what actually led to the outcome. It’s multidimensional. It accounts for context, time lag, headwinds and tailwinds, and everything else correlation ignores. That’s why it’s harder to fake and why it’s so valuable.

Many GTM teams still lean on correlation because it’s faster and easier to defend, especially under pressure. As Mark pointed out, correlation-based patterns are easy to manipulate. If the data doesn’t match the story you need to tell, you can tweak the inputs until it does. That’s why attribution charts and dashboards persist: they give the illusion of precision without exposing the actual drivers. It looks clean. It feels controllable. But it’s a shortcut that hides the real story.

An intuitive decision is either really right or really wrong. No leadership team can afford that kind of spread.

Your data is lying to you.

Mark shared a case where a fraud detection tool scoured over 14 years of CRM records. More than two-thirds of the data was found to be “engineered.”

It wasn’t one bad actor. It was many people over many years, under tremendous pressure to justify their seat at the table, slowly reshaping the story to fit what they needed to show.

People use data to defend themselves, not to learn.

This kind of manipulation is hard to catch with correlative tools. Causal systems, on the other hand, make it obvious when something doesn’t add up. It’s unavoidable. 

Delaware changed the rules. CMOs are now on the hook.

Unlike Sarbanes-Oxley, which took years to pass and gave leadership time to prepare, the Delaware rulings came quickly and without warning. Mark called it one of the biggest blind spots in recent corporate memory. 

The 2023 McDonald’s case expanded fiduciary oversight beyond the boardroom. Now every officer (CMOs included) is legally accountable for the quality and integrity of business data.

The writing’s on the wall. If you’re not governing your data, you’re exposed.

Data quality is now the number one trigger for shareholder lawsuits. CRM systems are full of data manipulation and missing governance. Lawyers know it’s an easy audit. If your GTM data can’t hold up under causal scrutiny, you’re exposed.

Attribution isn’t just flawed. It’s obsolete.

First-touch. Last-touch. MTA. Even Bayesian models. They’re all correlative. They’re all easy to manipulate. And they all fall apart under scrutiny.

Mark told the story of a meeting where a CIO changed the MTA weightings mid-call, then had someone else change them again. Marketing freaked out, but had no causal rationale to defend their weightings. The numbers were all made up.

If your model changes when you tweak the sliders, it’s not a model. It’s a toy.

Jon Miller, founder of Marketo and Engagio, recently said it himself: attribution is BS.

Respect to Jon for saying it out loud. That’s a bit like the first step in a 12-step program: admitting you have a problem. And that’s where every CMO still holding onto attribution logic has to start. 

Mark followed up with his own take on why causality is quickly becoming the standard.

Both are worth reading. 

Marketing is probabilistic, not deterministic.

Causal models don’t promise certainty. They help you understand what likely led to an outcome, accounting for what you can and cannot control. 

Mark compared it to throwing a rock off a building. Gravity ensures it will hit the ground every time, but where it lands and what it hits is a different question, especially when you consider things like time of day, weather, etc.

Two-panel black-and-white cartoon showing a rock falling from a building—during the day toward a crowd of pedestrians, and at night onto an empty street—highlighting how context changes the outcome of the same action.
Gravity is the constant. Everything else is a variable. (Image generated by AI)

It’s the same with marketing. You know your efforts will have an effect. What you need to model is the direction, magnitude, and time lag.

Start with outcomes. Work backwards.

The shift to causality has nothing to do with better math. As Mark said, if a vendor’s pitch is built on “new math,” you should run. The math already exists, and it works. 

What matters is asking the right questions. Don’t start with your data. Start with the outcome you care about.

  • What moved deal velocity? 
  • What increased the average contract value? 
  • What pulled new buyers into your sales process?

Start with the outcome. Work backwards. See if the data supports it.

That shift exposes where the real drivers are. And it resets expectations for performance. 

Mark compared it to baseball. If you hit .350 in the majors, you’re a Hall of Famer, even though you failed two-thirds of the time. Baseball is full of external variables players can’t control. 

Side-by-side comparison of Babe Ruth’s .342 batting average with a failing test score, showing how success in high-variance environments like baseball differs from academic grading.
Marketing is more like baseball than academia.

In academia, it’s the opposite. Most of your GPA is in your control. 

GTM should be judged like baseball, but it’s not. Markets are messy. Nothing is certain. Causal modeling reflects that uncertainty by showing you what data you’re missing or misreading. 

Traditional marketing metrics like attribution, on the other hand, expect certainty. Marketers are held to something closer to an academic standard, which makes zero sense given how uncertain markets are. 

And therein lies the problem, and it’s a critical insight for GTM teams: we’re trying to make sense of uncertainty using tools that assume predictability. The classic square peg through a round hole.

Attribution tools weren’t designed for complexity or context. They were built to assign credit. They don’t help GTM teams. They polarize them.

Final Thoughts

Mark explained a few key things every GTM team needs to plan for, including how correlation fails, why causality matters, what legal risk CMOs now face, and how to move beyond attribution logic in B2B GTM.

In Part 2, we’ll get into the mechanics. What causal models actually look like. How to map time lag and external variables. And how to build something your CFO will trust.

Missed the session? Watch it on LinkedIn here.

In the meantime, here’s a quick FAQ to clarify the core ideas, especially if you're new to the conversation or want to share this with your team.

  • What’s the difference between correlation and causality in marketing?
    Correlation shows when two things happen together. Causality shows what actually drove the result, taking time lag, context, and other variables into account.
  • Why are attribution models unreliable?
    Because they’re based on correlation. They’re easy to manipulate, and they rarely reflect what actually influenced the outcome.
  • What’s the legal risk for CMOs in 2025?
    Delaware rulings now hold all corporate officers, including CMOs, legally accountable for data quality. Shareholder lawsuits are already targeting flawed CRM data.
  • What’s a better alternative to MTA or last-touch models?
    Causal modeling. It starts from outcomes and works backwards to isolate what actually moved the needle.
  • Do I need better data to start?
    Not necessarily. You need better questions. Causal models help you figure out what data matters and where the gaps are.

If your dashboard still runs on attribution logic, this is your chance to change it.

If you like this content, here are some more ways I can help:

  • Follow me on LinkedIn for bite-sized tips and freebies throughout the week.
  • Work with me. Schedule a call to see if we’re a fit. No obligation. No pressure.
  • Subscribe for ongoing insights and strategies (enter your email below).

Cheers!

This article is AC-A and published on LinkedIn. Join the conversation!