Blog

Achim’s Razor

Positioning, Messaging, and Branding for B2B tech companies. Keep it simple. Keep it real.

Insight

Illusion of Control in B2B GTM: When Process Becomes Theater

When the market current turns, “more process” becomes theater. A recap of my Causal CMO conversation with Mark Stouse on GTM headwinds, no-decision deals, and reality-based forecasting.
February 10, 2026
|
5 min read

Imagine you’re scuba diving.

Cruising along a reef wall, barely noticing the current. Then you try to turn back. You can’t. The current is too strong. You panic and your oxygen starts depleting… fast.

That’s GTM in 2026.

In our last Causal CMO chat, that’s the picture Mark Stouse painted as we talked about what happens when the market current flips.

“Doing more” won’t save you. 

Like the scuba diver freaking out, it almost always makes the problem worse.

Takeaways

  • Market conditions decide your “oxygen burn”, so model them carefully or you end up blaming the wrong people. 
  • In a recent closed-lost analysis, more than 60% of lost deals ended in “no decision”.
  • Most reporting is political theater built to justify past spend. 
  • Write down your GTM assumptions, then test them with causal models.
  • Your stack isn’t broken. Your model of reality is.

The oxygen problem

Mark’s scuba analogy explains what most dashboards ignore.

Think of “the current” as the externalities out of our control. The headwinds, tailwinds, and crosswinds that affect all aspects of the business, especially go-to-market.

When the current helps you, everything feels easy. Your plays work. Your oxygen lasts.

When the current turns, the same plays cost more and convert less. Not because your team forgot how to sell. Because you’re swimming against something bigger than you. Like buyer behavior.

The ugly part: the moment you need more oxygen is the moment the CFO wants to save air.

If we don’t model market conditions as part of GTM performance, we end up blaming our team for the current. We fire the wrong people. We keep the wrong plays. And wonder why the numbers keep sliding.

Why GTM effectiveness keeps dropping

Mark shared some sobering data from his 5-Part Series on Substack: GTM effectiveness declined 40% between 2018 and 2025.

Chart: “Evolution of GTM Effectiveness (2018–2025)” showing GTM effectiveness falling from ~0.78 in 2018 to ~0.47 in 2025.

This is not a blip. It’s a systemic break.

Teams kept turning the crank on the same old frameworks while the environment changed underneath them.

Not because they are stupid. Because most GTM systems were built for stable conditions. They never accounted for headwinds, tailwinds, and crosswinds. They just assumed the water would stay calm.

So spend rises. CAC rises. Effectiveness drops.

Mark shared the uncomfortable truth:

70-80% of why anything happens is stuff we don’t control. We like to think we’re masters of the universe, even in some limited domain. But that’s a total fantasy. A fallacy. 

GTM hits the wall first because it is the first part of the company that collides with the market. It’s “the canary in the coal mine.” 

And when the canary chirps, you do not debate the air. You act. Ignore it and you lose the business.

The silent majority: “no decision”

During our chat, I shared a recent real-world closed-lost analysis where over 60% of lost deals ended in “no decision.” That aligns almost perfectly with the findings in The JOLT Effect.

Diagram showing most B2B deals end in “no decision” rather than choosing a vendor (Source: The JOLT Effect)

Not lost to a competitor. Not lost on price. Lost to inertia.

And Mark’s research shows as many as 4 out of 5 deals are now being lost to no decision.

If your GTM motion is built to beat other vendors, but the real battle is against doing nothing, you’re burning oxygen on the wrong fight.

The data does not lie. It also sucks. You still have to face it before you can change it.

If you are not tracking “no decision” as a first-class outcome, you are guessing about why deals stall. And guessing is expensive when the current turns.
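
Making it first-class is mostly bookkeeping. Here’s a minimal sketch in Python of what that tracking can look like, assuming a hypothetical closed-lost export; the file name, columns, and reason labels are illustrative, not from Mark’s data or mine:

```python
# Minimal sketch: treat "no decision" as a first-class closed-lost outcome.
# The CSV path, column names, and reason labels are hypothetical examples.
import pandas as pd

deals = pd.read_csv("closed_lost.csv")  # e.g., columns: deal_id, segment, loss_reason

# Normalize free-text reasons into a small, fixed set so the number is trackable.
reason_map = {
    "went dark": "no decision",
    "no budget this year": "no decision",
    "status quo": "no decision",
    "chose competitor": "competitor",
    "price": "price",
}
deals["reason_bucket"] = deals["loss_reason"].str.lower().map(reason_map).fillna("other")

# The headline number: what share of losses ended in no decision at all?
share = (deals["reason_bucket"] == "no decision").mean()
print(f"No-decision share of closed-lost: {share:.0%}")

# Diagnose it like a competitor: where does inertia win most often?
by_segment = (
    deals.groupby("segment")["reason_bucket"]
         .apply(lambda s: (s == "no decision").mean())
)
print(by_segment)
```

Once the number exists, you can watch it move when the current turns, instead of discovering it in a year-end post-mortem.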

When process becomes theater

A lot of “process” exists because teams never did the hard work of naming what creates value right now. 

Not yesterday. And not tomorrow. Right here. Right now.

Reality changes. Value changes with it.

A lot of reporting is political. Defensive. Built to justify money already spent, not to guide decisions.

That is how process becomes theater.

Lots of activity. Lots of slides. No increase in truth.

What should replace it?

Set expectations up front. Write them down. Then report against them.

Leaders avoid tight expectations because it invites judgment. But in 2026, clear expectations have become the position of highest trust.

Trust is the cartilage in the joint. Without it, everything feels bone-on-bone. That is what it feels like right now between CEOs, CFOs, CMOs, and boards when performance slips and nobody agrees on why.

Eventually you hit the “come to Jesus moment” where reality always wins. You stop trying to force the story. You deal with what is. 

It’s painful. It’s also freeing. Nobody debates reality.

You will never have all the facts

I asked Mark how he handles teams that resist forecasting because they “don’t have enough data.”

You will never have all the facts. Stop waiting for a condition that will never exist.

His approach is simple:

  1. Write down the unanswered questions about how GTM works as a system.
  2. Write down your current assumptions.
  3. Test those assumptions with causal models. Find what holds, what breaks, and what only holds sometimes.
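
As a toy illustration of step 3 (not Mark’s actual causal tooling), here is what testing one written-down assumption can look like. Everything below is simulated: the assumption under test is “paid spend drives opportunity creation,” and the `market` column stands in for the externalities this article calls the current:

```python
# Toy sketch of testing a written-down GTM assumption against a model that
# includes market conditions. All data is simulated; column names are made up.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 104  # two years of weekly observations

market = rng.normal(0, 1, n)                      # external current: headwind or tailwind
spend = 100 + 10 * market + rng.normal(0, 5, n)   # budgets loosen when times are good
opps = 20 + 0.05 * spend + 6 * market + rng.normal(0, 2, n)  # the market does the real work

df = pd.DataFrame({"spend": spend, "market": market, "opps": opps})

naive = smf.ols("opps ~ spend", data=df).fit()              # the dashboard view
adjusted = smf.ols("opps ~ spend + market", data=df).fit()  # the current modeled in

# If spend's coefficient collapses once market conditions enter the model,
# the assumption only "held" while the current was helping.
print(f"naive spend effect:    {naive.params['spend']:.2f}")
print(f"adjusted spend effect: {adjusted.params['spend']:.2f}")
```

The regression itself is beside the point. The point is that a written assumption becomes something you can falsify instead of defend.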

Bad assumptions do not just mess up tactics. They flow upstream into strategy. You can execute perfectly and still lose because you picked the wrong hill.

No, you do not need to rip and replace your stack

This is not rip-and-replace. Causal AI sits on top of what you already have. The hard part is not technical. It is how people think.

Most teams are trained to chase correlation. That habit creates confidence without proof.

The goal is straightforward: track reality now, track it forward, adjust as conditions change.

You might still arrive late. But you can explain why. You can document variance instead of hand-waving it. That changes decision-making. It also changes trust.

The adventurer mindset

Mark shared a story about his physicist mentor who told him to “use what you know to navigate what you don’t know.”

It reminded me of Neil deGrasse Tyson:

Neil deGrasse Tyson quote: “One of the great challenges in life is knowing enough to think you’re right, but not enough to know you’re wrong.”

Mark turned that into an operating mindset. 

You have been behaving like a librarian, organizing what you already know. Be an adventurer. Care more about what you don’t know than what you do know.

In other words, stop defending the map. Start updating it.

Final Thoughts

If your GTM effectiveness is sliding and the default response is “more leads” or “more output” or “new tools,” pause.

“More” won’t save a bad model.

If your model of reality is based on yesterday (or what you hope it to be), your KPIs won’t guide you. They’ll distract you. They’ll keep you busy while you drift.

A few gut checks:

  • Treat market conditions like the current. They decide your oxygen burn.
  • Audit your reporting. If it exists to defend spend, kill it.
  • Write down your GTM assumptions. Then test them.
  • If “no decision” dominates closed-lost, stop optimizing to beat competitors. Start optimizing to beat inertia.

Most likely, your stack isn’t what’s broken.

Your model of reality probably is.

Missed the session? Watch it here

If you like this content, here are some more ways I can help:

  • Follow me on LinkedIn for bite-sized tips and freebies throughout the week.
  • Work with me. Schedule a call to see if we’re a fit. No obligation. No pressure.
  • Subscribe for ongoing insights and strategies (enter your email below).

Cheers!

This article is AC-A and published on LinkedIn. Join the conversation!

Strategy

One-Page AI Strategy Template: Replace Roadmaps With Clarity

Stop building AI roadmaps nobody uses. Use a one-page AI strategy canvas to pick one obstacle, set guardrails, and define a metric your teams can execute.
February 3, 2026
|
5 min read

By Gerard Pietrykiewicz and Achim Klor

Achim is a fractional CMO who helps B2B GTM teams with brand-building and AI adoption. Gerard is a seasoned project manager and executive coach helping teams deliver software that actually works.

The CEO asks: “What’s our AI strategy?”

Most teams respond the same way. They spin up a working group, collect vendor lists, and crank out a 30-page deck with a three-year roadmap.

It looks impressive.

But it does nothing for Monday morning.

Your AI strategy should change decisions, habits, and outcomes inside the week. Otherwise it’s just theater.

Takeaways

  • If it doesn’t fit on one page, it won’t get used.
  • Start with one business obstacle, not a list of tools.
  • Put guardrails in writing so people can move without guessing.
  • AI helps your team solve the problem. It does not solve it for you.
  • If it’s clear, you can execute. If it’s vague, you’ll keep guessing.

What a real AI strategy looks like

A strategy is not “we will use AI.”

That’s like saying “we will use spreadsheets.” It’s meaningless.

A real strategy starts with a business obstacle that hurts. Then it makes a clear choice about what to do next.

For example, in a recent boardroom meeting, we heard the following adoption target:

“20% of our workforce will use AI at least 10 times a day by the end of 2026.”

The point isn’t who said it. The point is what it measures: habits.

Not pilots. Not slide decks. Not “exploration.”

Strategy means trade-offs

AI roadmaps answer the wrong question.

They answer: “What could we do with AI?”

Leaders need the opposite: “As an organization, how can we be more effective at solving our problems?”

As Michael Porter wrote, strategy is choice. It forces trade-offs. It forces you to decide what you won’t do.

Leaders create frameworks for success: if you hit your objectives, the company wins too. That’s the basic logic of OKRs.

AI roadmaps avoid that work. They hide weak choices behind more swimlanes.

People don’t read long documents

Most people scan. They read the top, skim the left edge, and move on.

Nielsen Norman Group documented this behavior with eye-tracking research and the F-shaped scanning pattern.

So if your “strategy” needs a 30-minute read and a 60-minute meeting to decode it, it will die.

A one-page strategy survives because it can be:

  • read in two minutes
  • referenced in real conversations
  • reused in weekly decisions
  • remembered without a refresher meeting

One page is a known pattern

Toyota has used the A3 approach for decades. The Lean Enterprise Institute describes an A3 as getting the problem, analysis, corrective actions, and action plan onto a single sheet.

Same idea here.

One page forces clear thinking and clear communication.

Why? Because real strategy has three parts: diagnosis, guiding policy, measurable target.

Do this Monday morning

One-page AI strategy canvas template with goal, obstacle, opportunity, guardrails, metric, cadence, and first 90 days.

Make a copy of this One-Page AI Strategy Canvas (shown above).

Complete the first page (use the second page example if you’re not sure).

Do one draft only. Try not to tinker or wordsmith.

  1. Goal: What matters most in the next 12 to 18 months?
  2. Obstacle: What is in the way right now?
  3. Opportunity: Where will we use AI first, and why?
  4. Guardrails: Permitted / Restricted / Forbidden.
  5. Metric: What number proves this worked?
  6. First 90 days: 2 to 3 steps, named and owned.
  7. Cadence: When do we review and adjust it?

Then send it to three people who will actually challenge you: one exec, one operator, one skeptic.

Book a 30-minute call.

Your only job in that call is to tighten the choices.

Avoid committees and drawn-out timelines. They defeat the purpose. 

Guardrails matter more than your tool list

Too often, companies get stuck here:

  • Leaders say “go use AI.”
  • Nobody knows what’s safe.
  • People either freeze or take risks in the dark.

Simple policies beat long policies. The rule of three helps people remember them.

Try this three-bucket guardrail approach:

  • Permitted: low-risk internal use
  • Restricted: needs a check
  • Forbidden: hard stop

Keeping things in buckets of three creates clarity.

Clarity turns “I’m not sure if I’m allowed” into “I know what I can do right now.”
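
If you want the buckets to live somewhere more executable than a PDF, a lookup this small is enough. This is a hypothetical sketch, and the example use cases are illustrative, not recommendations:

```python
# Hypothetical sketch: the three-bucket guardrail as a simple lookup.
# Bucket names mirror the list above; example entries are illustrative only.
from enum import Enum

class Bucket(Enum):
    PERMITTED = "low-risk internal use"
    RESTRICTED = "needs a check"
    FORBIDDEN = "hard stop"

GUARDRAILS = {
    "summarize internal meeting notes": Bucket.PERMITTED,
    "draft customer-facing email": Bucket.RESTRICTED,
    "paste customer data into a public model": Bucket.FORBIDDEN,
}

def check(use_case: str) -> Bucket:
    # Unknown use cases default to RESTRICTED: people ask instead of guessing.
    return GUARDRAILS.get(use_case, Bucket.RESTRICTED)

print(check("draft customer-facing email").value)   # -> needs a check
print(check("something nobody anticipated").value)  # -> needs a check
```

The default matters most: anything unlisted lands in Restricted, which turns “I’m not sure” into a quick conversation instead of a risk taken in the dark.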

You’re probably already using AI

Many teams already use AI at least occasionally, even when leadership thinks “we’re still figuring it out.”

Gallup reported daily AI use at work at 10% in Q3 2025 and 12% in Q4 2025, with “frequent use” (a few times a week) at 26% in Q4.

Strategy can’t live in a deck. It has to show up in habits.

Final thoughts

Stop writing AI roadmaps nobody will read.

Start with a one-page strategy people can use.

In the next installment, Gerard and I will show you how to turn the One-Page AI Strategy Canvas into an OKR and keep it alive with a simple 12-month review cadence so it doesn’t become “write it once, forget it.”

If this was useful, forward it to one peer who’s drowning in “AI strategy” swimlanes.

If you like this co-authored content, here are some more ways we can help:

Cheers!

This article is AC-A and published on LinkedIn. Join the conversation!

Strategy

The GTM Reality Gap: Why “More MQLs” Keeps Failing

Fewer than 1% of leads convert. When 77% of buyers say purchasing is complex, more MQLs don't help. Here's what replaces them and how to close the reality gap fast.
January 29, 2026
|
5 min read

Reality is like gravity. It doesn’t care what you or I think about it.

During last week’s Causal CMO chat, Mark Stouse laid out how many B2B tech leaders watch their GTM motion sputter into 2026.

Because here’s what’s true right now: Your CEO wants more MQLs. Your CFO wants proof. And the math says fewer than 1% of leads convert to closed deals in lead-centric processes. 

That’s not a rounding error. That’s a broken model.

Takeaways

  • Reality is like gravity. It doesn’t care what we think about it.
  • When buying gets harder, more volume doesn't help.
  • Pushing more volume almost always feeds waste, not revenue. 
  • Treat GTM like a GPS. “Make it easier to buy” is the best route.

Start With Reality, Not Your Version of It

Mark began by making a critical distinction many of us miss.

He separated truth, fact, and reality.

  • Truth is a knife edge. You fall on one side or the other, and the conversation stops.
  • Facts change. Pluto was a planet. Now it’s not. Pluto didn’t change. Our understanding did.
  • Reality is what’s actually happening right now, regardless of what we want to believe. And Mark’s POV is similar to Jack Welch’s: 

“You have to be willing to see things as they really are. Not as you might want them to be.”

This distinction is important because most GTM conversations live in truth-and-taste debates or cling to outdated facts while reality (actual win rates, deal velocity, churn drivers) gets ignored.

Four Realities Reshaping GTM in 2026

1. Buying got harder

According to Gartner, 77% of B2B buyers say their last purchase was “very complex or difficult.” When buying gets harder, more leads don’t make buying easier. They just amplify existing problems in your product or sales process. 

More also means more riff-raff and distractions. Quantity does not equal quality.

2. Discovery is shifting to AI

Pew Research found that when Google shows an AI summary, people click traditional search results 8% of the time vs 15% without one. That means your content can exist and still not get visited. Attention is rationed, and trust is mediated by machines.

3. Finance wants defensible forecasts

Mark didn’t mince words: 

“It is accountability. It's provable, auditable accountability.”

If you can’t forecast outcomes, budgets tighten. Not because the CFO hates marketing. Because uncertainty costs money.

Mark reframed Opex entirely: it's not an “investment with ROI.” It’s a loan of shareholder money that must be paid back with interest. That means go-to-market spend that doesn’t actively improve revenue, margin, or cash flow is destroying the payback mechanism.

4. Your knowledge is going obsolete

About 16-17% of your expertise becomes irrelevant each year. The risk isn’t age. It’s intellectual intransigence, as Mark put it. 

The leaders who win will be “addicted to learning” (his words). Standing still means you’ll have the relevance of someone five years out of college within five years.

Why More Volume Backfires

In practical terms, Mark put it this way:

“The more stuff in the door argument is an attempt to overwhelm an unpleasant reality under a sea of new, cool stuff.”

If your product is stale, churn is rising, sales cycles drag, or positioning is fuzzy, more volume turns into a megaphone for the problem.

“A great marketing effort on a crappy product or a crappy sales process only makes the company fail faster.”

But the deeper issue is this: if you’re solving for volume, your GTM structure is probably built for 2016, not 2026. Single-threaded MQL systems assume linear buying and individual decision-makers. The reality is that buying involves committees, consensus, and risk mitigation. More leads don’t fix that. They only make it worse.

Marketo co-founder Jon Miller echoes this too: measurement frameworks are breaking down, and marketing is treated like a deterministic vending machine. 

These aren’t separate problems. They’re symptoms of the same shift.

What Replaces MQLs

This isn’t a metric swap. It’s a system (and mindset) shift.

If you need something your CFO will accept, try this:

  • Opportunity creation with buying groups attached (not single leads)
  • Win and loss reasons you can repeat (not anecdotes)
  • Forecast ranges you can defend (even if imperfect)

Forrester’s research backs this: multiple leads contribute to one opportunity, so treating a deal like it came from one MQL misrepresents how revenue actually happens. You’re changing how you think about the entire demand engine.
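
To make the record shift concrete, here is a minimal sketch with made-up field names: a lead-centric system stores one person per deal, while an opportunity-centric one carries the whole buying group and treats “no decision” as a recordable outcome:

```python
# Minimal sketch of the record shift, with hypothetical field names:
# an opportunity that carries a buying group instead of a single MQL.
from dataclasses import dataclass, field

@dataclass
class Contact:
    name: str
    role: str  # e.g., "champion", "economic buyer", "security reviewer"

@dataclass
class Opportunity:
    account: str
    buying_group: list[Contact] = field(default_factory=list)
    stage: str = "open"
    outcome_reason: str | None = None  # includes "no decision", not just "lost to X"

opp = Opportunity(
    account="Acme Corp",
    buying_group=[
        Contact("Dana", "champion"),
        Contact("Lee", "economic buyer"),
        Contact("Sam", "security reviewer"),
    ],
)

# Several "leads" contributed to one deal; crediting one MQL misstates reality.
print(f"{opp.account}: {len(opp.buying_group)} people in the buying group")
```

Most CRMs can already model something like this. The hard part isn’t the schema. It’s agreeing to report on it.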

If you want a deeper dive, check out my 5-Part End of MQLs series.

Run This Diagnostic

The following questions expose system problems, not just performance gaps.

Try to answer them as fast as you can (be honest):

  • Can we explain why we win deals in plain language?
  • Can we explain why we lose deals in plain language?
  • Can we tie churn to specific causes we can fix?
  • Can we forecast within a range we trust?
  • Do we know what buyers fear when they evaluate us?
  • If we doubled MQLs next month, would conversion hold and can we prove it?

If you answered “no” or “not sure” to more than two, you don’t have a lead problem. You have a reality problem.

And that reality problem most likely lives in your operating model, not your tactics.

Where to Start (7 Days)

Day 1: Pull 10 closed-won and 10 closed-lost from the last 6 months. Write the top 3 “why” drivers for each. One page.

Day 3: List where deals actually stall. Not CRM stages. Real moments. Internal disagreement. Budget reset. Security review. Implementation fear. Switching cost. Exec approval.

Day 7: Pick one friction point and fix it for one segment. A clearer proof pack. A tighter implementation plan. A risk-reversal story. A pricing simplification. A buyer enablement asset that helps an internal champion sell you.

Mark's advice on execution was practical: 

“There is no requirement that you confront reality in public. Run a skunkworks. Learn. Make changes quietly.”

The point? You want traction without a re-org. 

Interested in running a skunkworks project? Mark and I already covered that here.

Final Thoughts

If you’re clinging to MQLs, it’s most likely because you’re trying to manufacture certainty from information you mistook for fact and truth.

But once again, reality (like gravity) doesn’t care what we think about it.

Pick one “no” from the diagnostic above. Ask why until you hit a root cause. Then run a seven-day pilot that makes buying easier and risk feel smaller.

You can even use AI to help you. Check out how SaaStr did just that with their Incognito Test.

That’s how you close the reality gap. Not by fighting it, but by accepting it and working with it.

Missed the session? Watch it here

If you like this content, here are some more ways I can help:

  • Follow me on LinkedIn for bite-sized tips and freebies throughout the week.
  • Work with me. Schedule a call to see if we’re a fit. No obligation. No pressure.
  • Subscribe for ongoing insights and strategies (enter your email below).

Cheers!

This article is AC-A and published on LinkedIn. Join the conversation!

Execution

How SaaStr Runs on 20 AI Agents: Field Notes for GTM Leaders

Jason Lemkin says SaaStr runs on 20 AI agents and 1.2 humans. Field notes on oversight, trust, vendor help, and the incognito test to pick your first use case.
January 13, 2026
|
5 min read

By Gerard Pietrykiewicz and Achim Klor

Achim is a fractional CMO who helps B2B GTM teams with brand-building and AI adoption. Gerard is a seasoned project manager and executive coach helping teams deliver software that actually works.

During a recent (and timely) Lenny’s Podcast episode, Jason Lemkin, cofounder of SaaStr, dropped a few jaws and rolled some eyes (just read the comments) with this:

He used to run go-to-market with about 10 people. Now it’s 1.2 humans and 20 agents. He says the business runs about the same.

That’s a spicy headline.

It’s also the least useful part of the story.

If you’re trying to adopt AI in GTM without breaking trust, below are field notes worth considering.

Takeaways

  • Agents do not run themselves. Someone has to orchestrate them and own quality.
  • Start with one workflow everyone hates. Run it hard for 30 days.
  • Pick vendors who will help you get it live, not the “best” product on paper.
  • Run the incognito test on your own site. You’ll find your first use case.

Let’s cut to the chase… do this Monday AM:

Jason’s most useful advice isn’t “immediately replace your GTM team.”

It’s the Incognito Test:

  1. Open an incognito browser
  2. Use a fresh Gmail address
  3. Go to your own website
  4. Try everything: contact sales, request a demo, contact support, sign up, onboard
  5. Track what happens and how long it takes

You will probably find something broken. Maybe support takes too long. Maybe the contact “disappears” into the CRM ether. Maybe your SDRs take three days to respond (or never do). 

Pick the thing that makes you the most angry. That’s your first agent use case.

One workflow. One owner. One month of training. Daily QA.

Then decide if you need a second, a third, and so on.

This is a pattern Gerard and I keep seeing too. 

The real question isn’t “What’s our AI strategy?” It’s “What’s the smallest thing I can do right now that actually helps?”

Source: Lenny’s Podcast, 01:25:29

Now let’s get into Jason’s chat with Lenny

Two salespeople quit on-site during SaaStr Annual. No, not after the event… during.

Jason Lemkin had been through this cycle before. Build a team, train them, watch them leave, rinse and repeat. This time, he made a different call: 

“We’re done hiring humans in sales. We’re going all-in on agents.”

Fast-forward six months: SaaStr now runs on 1.2 humans and 20 AI agents. Same revenue as the old 10-person team.

Source: Lenny’s Podcast, 0:11:52

The detail everyone misses

Jason said this about SaaStr’s “Chief AI Officer”, Amelia* (a real person):

“She spends about 20%* of her time managing and orchestrating the agents.”

Every day, agents write emails, qualify leads, set meetings. Every day, Amelia checks quality, fixes mistakes, tunes prompts. 

This isn’t “set it and forget it.” It’s more like running a production line. A human checks it, fixes it, and tunes the system so it does not drift.

Agents work evenings. Weekends. Christmas.

But if nobody’s watching, the system decays… fast. 

And that’s the part vendors don’t put in their demos.

* Amelia is the 0.2 human who spends 20% managing agents. Jason is the 1.0. 

The proof point that flipped the risk math

Jason’s breakthrough came from a generic support agent. It wasn’t built for sales. It wasn’t trained on sponsorships. But it closed a $70K deal on its own because it “knew” the product and responded instantly at 11 PM on a Saturday.

That’s why this conversation matters.

Not because an agent can write emails. But because one can close revenue if you give it enough context and keep it on rails.

Source: Lenny’s Podcast, 0:07:53

We’ve already seen this too

Jason’s experience lines up with what Gerard and I have also seen in the field.

Start with a real workflow, not an “AI strategy”

  • In our slide deck workflow article, we made the same point. Pick a shared task, get to 80% fast, then spend the saved time on judgment and decisions. 
  • Jason did the GTM version of that. He didn’t start with a transformation roadmap. He started with work nobody wanted to do, then pushed it until it held.

Speed does not fix the wrong thing

  • In our vibe-coding article, we said the prototype is the demo tape, not the album. If you build the wrong thing faster, you just waste time at higher speed. 
  • Jason's version is much more blunt. Train the agent. Correct it. Repeat. Do that for 30 days, and spend time on it every day. No training means bad output at scale.

Adoption is a leadership job

  • In our leadership barriers article, we said that if leaders don’t use the tools, teams won’t either. Keep guardrails simple. Run a small pilot. Show your work.
  • Amelia, Jason's Chief AI Officer, shows visible ownership. One person owns the system. One person runs it daily. That’s what keeps it real.

Final thoughts

Whenever new tech comes along, almost everybody immediately wants the potential upside. 

Almost nobody wants the governance.

It reminds me of the dot-com boom. Everyone wanted a website. No one wanted to manage the content ecosystem (that’s still true today). 

Agentic AI, like what Jason implemented at SaaStr, is no different. 

Customer-facing GTM output carries real risk for your brand reputation. If your AI agents hallucinate, spam, or misread context, you don’t just lose a meeting. You lose trust.

And that’s the simple truth we keep coming back to:

The constraint is not capability. It’s trust. And trust is earned, not bought (or a bot).

In our AI adoption challenges article, we said tool demos hide the messy middle: setup, access, QA, and the operational reality of keeping this stuff reliable. 

Jason’s story doesn’t remove that mess. It confirms it. He just decided the mess was worth automating.

Watch the full conversation: Lenny's Podcast with Jason Lemkin

If you like this co-authored content, here are some more ways we can help:

Cheers!

This article is AC-A and published on LinkedIn. Join the conversation!

Insight

2026 is the Year GTM Faces Reality: What Leaders Must Unlearn

Why Finance is questioning GTM spend, why “no decision” is rising, and what to do before 2026 budgets tighten. Preview of the Causal CMO series.
December 24, 2025
|
5 min read

Last week, Mark Stouse and I wrapped a preview for a new 4-Part Causal CMO LinkedIn Live series running from January to March. The message is clear and urgent. The rules have already changed. Budgets will follow. So if you’re still defending GTM spend with correlation charts while deals continue to stall, 2026 is going to hurt.

Takeaways

  • Finance is driving the conversation now. Mark says 80-85% of inbound interest to his advisory firm is coming from Finance teams.
  • GTM effectiveness has been sliding for years. The waste is now too big to ignore.
  • “No decision” kills more deals than competitors. That turns CAC into a liability with no offset.
  • Oversight and disclosure expectations keep tightening. Leaders need cleaner controls, cleaner data, and clearer logic.

Four upcoming episodes

Starting in January, Mark and I will cover what GTM teams need to unlearn and relearn in 2026. 

Here’s the schedule: 

  1. Jan 22, 2026: Diagnosing the GTM Reality Gap 
  2. Feb 5, 2026: Process Collapse + The Illusion of Control
  3. Feb 19, 2026: The Stack Scaled the Wrong Assumptions
  4. Mar 5, 2026: Causal GTM as the Bridge

Details for Episode 1 will be posted on LinkedIn in early January.

Now on to the recap.

Finance has entered the chat

Mark’s recent 5-Part Go-to-Market Effectiveness Report triggered over 4,200 DMs in two weeks. And his causal AI advisory firm, Proof Causal AI, has been inundated with inbound activity. Most of it is not coming from CMOs. And that is troubling. 

“Between 80 and 85% of our inbounds are from Finance teams. The attention from Finance leaders and CEOs was off the charts. CMOs were in the minority.”

That tells you all you need to know about where GTM accountability is headed. Leadership no longer cares about MQLs and busy dashboards. 

They care about: 

  1. Are we getting value for this spend?
  2. Are we adding risk while telling ourselves we are fine?

Part of this shift is legal and governance. 

Delaware oversight expectations and AI disclosure scrutiny have raised the bar on how leaders justify spend and data quality. 

“This is going to be the storyline of 2026 and for sure 2027. In 2027, a lot of this is going to be far more visible and unpleasant.”

Deeper dive: We’ve already covered the Delaware Ruling, and Mark wrote a great piece on the B2B Governance Revolution.

Reality is not open for discussion

Mark used the Gartner hype cycle lens to frame what's coming in 2026. When aspiration meets operating reality, you hit the trough of disillusionment.

“Reality is not open for discussion. It just is what it is.”

This echoes something former GE CEO Jack Welch said over 30 years ago:

Jack Welch quote: “Face reality as it is, not as it was, or as you wish it to be.”

In GTM, that reality shows up as longer cycles, smaller deals, and more deals ending with no decision.

If your playbook still “works,” but outcomes don’t, there’s your Reality Gap.

The effectiveness slide

Mark’s published work and recent coverage point to a long decline since 2018. In one of his recent MarTech articles, he cites effectiveness falling from 78% (2018) to 47% (2025) across 478 B2B companies.

And no, GTM teams didn’t suddenly get dumb.

“One out of every two dollars in go-to-market is waste today. Why? Because go-to-market teams have been ignoring externalities… the 70 to 80 percent of what causes anything to happen, which is the stuff we don’t control.”

While external forces got stronger and changed faster, most GTM teams kept optimizing internal metrics that do not predict revenue.

No Decision: the silent killer

Founders and leaders should care about this more than anything else.

You can lose to a competitor and still learn something. But if you lose to indecision, you get nothing but sunk cost.

Matt Dixon and Ted McKenna, authors of The JOLT Effect, published a solid proof point: in a study of more than 2.5 million recorded sales conversations, 40% to 60% of deals ended in “no decision.”

Diagram showing most B2B deals end in “no decision” rather than choosing a vendor (Source: The JOLT Effect)

And as Mark explains, the financial impact is sobering:

“If it’s taking your sales teams a lot longer to close smaller deals, and a lot of the deals are closing without any decision, then you have no offset revenue for CAC.”

So when GTM leaders defend spend with correlation, Finance hears: “We can’t explain why buyers aren’t deciding.”

The culture problem: control

After we went off air, Mark dropped the best nugget of the whole conversation:

“There has been a culture, a narrative control, that has been built for decades. And they are deeply concerned about any system that is beyond their control. And the whole thing becomes completely unsustainable.”

That explains a lot:

  • why attribution stays popular
  • why “success” definitions quietly shrink
  • why teams resist anything that makes the story harder to manage

Causality does the opposite. It forces reality into the room. Including the parts you do not control.

And that’s terrifying for teams who’ve built careers on controlling the story.

Headstart for January

Mark’s Tip: You can’t graft new reality onto old systems. You have to unlearn first.

  • Track “no decision” like it’s a competitor. Put a number on it. Diagnose it.
  • Stop defending spend with correlation. It buys time, not trust. Look for causal solutions. 
  • Treat oversight and data quality like risk management. Because courts and regulators already do.

Have a wonderful holiday. See you in January.

Missed the preview session? Watch it here

If you like this content, here are some more ways I can help:

  • Follow me on LinkedIn for bite-sized tips and freebies throughout the week.
  • Work with me. Schedule a call to see if we’re a fit. No obligation. No pressure.
  • Subscribe for ongoing insights and strategies (enter your email below).

Cheers!

This article is AC-A and published on LinkedIn. Join the conversation!

Execution

Vibe Coding for Team Clarity: Prototypes in One Go

Vibe coding helps teams get clarity fast: idea > clickable prototype > feedback in one sitting. Treat it like a demo tape, not production code.
December 16, 2025
|
5 min read

By Gerard Pietrykiewicz and Achim Klor

Achim is a fractional CMO who helps B2B GTM teams with brand-building and AI adoption. Gerard is a seasoned project manager and executive coach helping teams deliver software that actually works.

Vibe-coding gets hyped as a way to “build an app in minutes.” That’s not the useful part. The useful part is this: it lets you create disposable prototypes fast enough that teams can stop guessing and start reacting. It’s a clarity tool, not a shipping tool.

Takeaways

  • Vibe-coding = fast prototyping, not “build an app in minutes.” 
  • Treat the prototype like a demo tape, not the album.
  • Efficiency comes after effectiveness. 
  • Do not ship vibe-coded output as production.

What changed: it’s all one go now

Many teams (engineering, product, GTM) still work like this:

Write a doc. Explain it in a meeting. Someone turns it into wireframes or copy. Meet again. Then build something so everyone can finally see what you meant. 

That’s a relay race with too many handoffs.

Vibe-coding collapses it into one run. Idea to clickable prototype to feedback in the same sitting.

Fewer handoffs. Fewer miscommunications. Fewer sprints wasted building the wrong thing because everyone had a different mental picture.

Poor requirements are among the top reasons software projects fail, and misalignment between what's needed and what's built drives rework and delays. We've written before about why AI adoption stalls. Vibe-coding addresses that gap early, when changes are still cheap.

The demo tape lens

Back in the 90s, I used a Fostex 4-track to record demo songs on cassette tapes (see pic below). You’d lay down drums and bass first, then guitars, keys, vocals, etc. If you needed more than four tracks, you’d mix some down to free up space. Every time you did that, you’d lose a bit of quality.

Fostex X-18 multitrack cassette recorder with a demo tape loaded.

It was limited and rough. But it got the song down so the band could practice their parts and make decisions.

You weren’t making the album. You were capturing enough structure so everyone could hear the same thing and figure out what worked and what didn’t.

Then you’d take that into a real studio and record it properly. That part was expensive and laborious, so you didn’t want to walk in there still arguing about the chorus.

The demo tape forced clarity before the expensive work started.

Vibe-coding (even GenAI) plays the same role for product work. The prototype is the demo, not the album.

And here’s what people miss: efficiency is a byproduct of effectiveness. If you build the wrong thing faster, you just waste time at a higher speed. 8x0 still equals 0. 

Gerard’s example: 28 minutes to alignment

Gerard recently built a DRIP calculator prototype.

He started with a short Product Design Doc (roles, basic flow, key screens). He used Manus to shape prompts for Bolt, pasted them in, and generated a working front-end prototype.

PDD to clickable prototype: 28 minutes.

Not production code. Not secure. Not deployable. But it had screens, navigation, inputs, and a flow you could walk through with a dev team.

That changes the conversation.

Instead of debating abstract requirements, you click through like a user would and the real questions surface: 

  • What should happen first? 
  • Which fields matter for v1? 
  • What’s truly MVP and what’s just nice to have?

You see where the flow breaks. You see what’s confusing. You see what you can cut without losing the core experience.

Gerard's 28-minute prototype helped the team clarify requirements before dev work began, avoiding the usual back-and-forth about what to build.

Where this pays off

  • You sit with Engineering and realize your “simple MVP” needs three data validation rules, not one. Better now than mid-sprint.
  • You show it to Sales and they point out the objection buyers will have on screen two. They hear the same confusion from prospects every week.
  • You run it past Customer Success and they spot the setup step customers always mess up. The one that generates tickets and causes churn.
  • You show it to Leadership and instead of “make it more intuitive,” they say “this screen right here is the problem.” Now you’ve got something specific to fix.

That’s the “demo tape effect”. You expose weak parts while changes are still cheap.

AI feels kind of like GarageBand 

When Pro Tools came out, home recording got better, but it was still too expensive. Hardware, interfaces, and software required a budget most musicians didn’t have.

Apple’s GarageBand changed that (bottom right, one of my recent “demo songs”). Suddenly anyone could lay down ideas without dishing out loads of cash. It’s not Logic or Final Cut, but it’s good enough.

(Left) Lovable prototype showing a 13-week rolling work view with tasks by team and week. (Right) GarageBand multitrack session with stacked audio tracks and a drum loop timeline.

AI feels similar to me (top left, one of my prototype “demos” in Lovable). Tools like Claude, ChatGPT, Lovable, and Cursor let me build prototypes much quicker than before. Not production-quality (or well-written). Not secure. But good enough to test an idea and get feedback before committing real resources.

That shift from “you need professionals” to “you can try this yourself” changes who gets to prototype and how fast ideas move.

What it is, what it’s not

Vibe-coding is a fast way to get concrete.

It’s not a replacement for engineering. The prototype is disposable, like a demo in GarageBand. Useful, rough, temporary.

Your real product still needs code standards, reviews, testing, security, proper architecture.

Vibe-coding just helps you walk into that work with fewer assumptions and tighter decisions.

Think of it as the new whiteboarding. It shows everyone the same thing before you start building the real thing.

Looking for more ways to make AI adoption practical? See how we use AI to draft slide decks in minutes.

Again, it’s not perfect, but progress trumps perfection. 

Final Thoughts

Pick one feature you’re planning to build next month. Use a vibe-coding tool to generate a clickable prototype this week. Spend 30 minutes walking your team through it.

Track what happens: How many requirements get clarified? How many assumptions get challenged? How many “oh, I thought you meant...” moments do you avoid?

That’s the value. Not the prototype itself. The alignment it creates.

Once you align on what to build, let your team build the actual product (the right product) without wasting a sprint on guesswork.

If you like this co-authored content, here are some more ways we can help:

Cheers!

This article is AC-A and published on LinkedIn. Join the conversation!