Your Stack Is Not the Fix. Your Model Is.

Two weeks ago, a CFO told Mark Stouse, “My go-to-market operation is bankrupt.”

Not struggling. Not underperforming. 

Bankrupt.

This is where a lot of B2B tech companies quietly are right now. CAC keeps climbing. Deal size is down. Cycles are longer. And the default response is to replace tools, rebuild the dashboard, or layer AI on top of it all.

During our recent Causal CMO chat, Mark laid out a simple but uncomfortable truth: the tools aren’t the problem. The GTM model underneath them is. 

And that’s a huge reason why B2B GTM remains stuck.

Takeaways

  • Your martech stack reflects your assumptions. If results are sliding, audit your model before you buy more software.
  • Being wrong isn’t the problem. Staying wrong is.
  • AI on top of bad logic scales the problem.
  • GTM teams that can’t explain why things stopped working are creating a career-level risk.

Software is codified belief

All software, including your entire martech stack, is codified logic. It embeds assumptions about how buying works, how leads behave, and what predicts revenue.

“All software is codified information. It represents the way we learn and process information. And that means it embodied the logic that went into B2B marketing and go-to-market, starting in about 2000-ish.”

That’s the deterministic gumball machine. You put a quarter in. A gumball comes out.

[Image: Gemini-generated retro pop-art illustration of a gumball machine filled with MQL-labeled gumballs, a hand inserting a 25-cent coin, the word CLINK — the deterministic B2B lead generation model.]

Apply that to B2B revenue and you get: fill the top of the funnel, closed deals come out the bottom.

The problem is that buying is human behavior. Not thermodynamics. Treating it as deterministic was always wrong. It worked long enough that nobody had to face it. Now they do.

The reality is that marketing has always been probabilistic. It has never been a linear, deterministic process.

The wrong model, at scale

Here’s what happens when you build technology on top of flawed assumptions like “gumball” logic:

“Technology is a point of leverage for human activity. If you’re wrong in your logic, automating it for scale just means you create more crap.” 

That’s why a lot of teams feel worse after modernizing their stack. More automation. More sequences. More scoring. More dashboards. More “confidence”. Less truth.

The tools worked exactly as designed. The design was the problem.

That design wasn’t invented by marketers. It came from VC boards and investors who wanted predictability and a narrative they could control.

“The idea of a deterministic go-to-market machine originated with VCs.” 

A lot of what GTM teams are being blamed for now was baked in from the top. The problem is that conditions changed. For example, “no decision” now kills more deals than competitors do.

Old logic is exposed.

The consequences of being wrong

This is the part most GTM conversations skip.

B2B marketing hasn’t grappled with what it actually means to have been wrong. Not just tactically. Foundationally. And the proof is how AI is being deployed right now: layered on top of the same frameworks that already stopped working before GenAI became a thing.

“No one likes to hear this. I don’t like to hear it. But if we have the wrong tool, it’s because it has the wrong logic sequence. It’s embodying our logic sequence, the one that we told it to have.” 

Here’s why this is important to understand if GTM teams want to fix the model:

“Accumulated knowledge and experience goes straight to the heart of our self-concept. As soon as you tell me that 20% of my knowledge is obsolete, I take that rather personally. When I realized that the price of learning is being wrong about what I thought I knew before, I became much more okay with it. Even if your response is, I’m just not going to learn anymore, you’re still wrong. You’re just wrong and frozen.”

Let that sink in for a moment.

Wrong and frozen is not a neutral position. It’s a career-ending one for GTM leaders who can’t explain to their CFO why results keep sliding.

And if you keep hitting a wall with your leadership team because they don’t want to hear the truth, it may be time to update your CV.

What to keep, what to kill

A causal model is what Mark calls “a digital twin of known reality”. It surfaces what’s actually driving outcomes, net of everything you don’t control. And it produces something most dashboards never give you: a stack rank of what’s working.

“You see things change places in that stack rank. If you’re in the bottom third, you need to kill it or figure out why.”

It also tells you time to value. Every tactic has a different lag to results. If you don’t know when something is supposed to pay off, you’ll either kill it too early or keep it too long.

This is where GTM leaders need to step up and call a spade a spade:

“Time lag allows you to set expectations accurately with your executive team. Let’s say, looking in advance, this is going to create a lot of value, but it’s going to take 16 months. If you come to me and complain at month 12, or month 10, I’ll point back and say, You agreed. Here’s your signature.”

That’s not a forecast. That’s a defensible commitment. There’s a difference.
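To make the two outputs concrete, here is a minimal sketch of a stack rank with time-to-value lags. The tactic names, contribution percentages, and lag figures are all hypothetical placeholders; in practice a causal model would estimate them, net of market conditions.

```python
# Hypothetical modeled contribution to revenue (net of external factors)
# and lag-to-impact in months for each tactic. Illustrative numbers only.
tactics = {
    "analyst_relations": {"contribution": 0.31, "lag_months": 16},
    "field_events":      {"contribution": 0.22, "lag_months": 6},
    "paid_search":       {"contribution": 0.14, "lag_months": 2},
    "webinars":          {"contribution": 0.09, "lag_months": 4},
    "display_ads":       {"contribution": 0.03, "lag_months": 3},
    "direct_mail":       {"contribution": 0.02, "lag_months": 5},
}

# Stack rank: highest modeled contribution first.
ranked = sorted(tactics.items(),
                key=lambda kv: kv[1]["contribution"], reverse=True)

# Bottom third: "kill it or figure out why".
cutoff = len(ranked) - len(ranked) // 3
for i, (name, info) in enumerate(ranked, start=1):
    flag = "KILL OR EXPLAIN" if i > cutoff else "keep"
    print(f"{i}. {name}: {info['contribution']:.0%} of modeled lift, "
          f"pays off in ~{info['lag_months']} mo [{flag}]")
```

The lag column is what turns the rank into a defensible commitment: a tactic with a 16-month time to value gets judged at month 16, not month 10.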

GPS, not dashboards

A causal model doesn’t report on last quarter. It tells you where you are now, what’s changed, and what route gives you the best chance of hitting your destination. Judea Pearl calls this causal engineering: not just what happened, but why, and what to do differently.

“It will start to say: you were on a really good route. But things have changed. This is not a good route anymore.” 

If the map is wrong, every route looks optimized. You’re still lost.

Final thoughts

If your GTM is stalling, ask yourself these questions before you approve any new tool or campaign:

  • Can you explain in plain language why you win deals and why you lose them?
  • Do you know which tactics are actually driving revenue, net of market conditions?
  • Are you tracking “no decision” as a first-class outcome?
  • When did you last audit the assumptions your stack is built on?
  • If you doubled activity next quarter, would the underlying logic hold?

If the answers are unclear, you don’t have an execution problem. You have a model problem.

Start there. Write down your current GTM assumptions. All of them. Then ask which ones you’ve actually tested and which ones you inherited. 

That’s the first step. It costs nothing but honesty.

More won’t fix it. Faster won’t fix it.

Fixing the logic fixes it.

Missed the session? Watch it here.

Mark’s full research is on his Substack.

If you like this content, here are some more ways I can help:

  • Follow me on LinkedIn for bite-sized tips and freebies throughout the week.
  • Work with me. Schedule a call to see if we’re a fit. No obligation. No pressure.
  • Subscribe for ongoing insights and strategies (enter your email below).

Cheers!

This article is AC-A and published on LinkedIn. Join the conversation!