Positioning, Messaging, and Branding for B2B tech companies. Keep it simple. Keep it real.
GTM teams trip over each other because of culture, not strategy. “Better At” is a mindset that can shift your team from competing for credit to actually getting better at working together. Be “Better At” curiosity, not control; showing up to contribute, not one-upping. If you’re leading a B2B team and you’re tired of the same old drama, this one’s for you.
We don’t need more clever acronyms, prompts, playbooks, or dashboards.
We need braver marketers. People who care more about showing up and improving their tribe than about getting the credit.
That’s what Tracy Borreson and I got into during her Crazy Stupid Marketing podcast.
We started with marketing. But the conversation kept pulling us deeper into mindset, culture, and how GTM teams can stop tripping over each other.
Our discussion built on what I shared earlier in this LinkedIn Pulse article and led to a thoughtful question:
What happens when we stop trying to be better THAN each other… and start getting better AT helping each other?
This idea started in a personal place. A strained conversation. A moment that reminded me we’re all just doing our best with what we’ve got.
And a quote from Epictetus.
“These reasonings do not cohere: I am richer than you, therefore I am better than you; I am more eloquent than you, therefore I am better than you. On the contrary these rather cohere: I am richer than you, therefore my possessions are greater than yours; I am more eloquent than you, therefore my speech is superior to yours. But you are neither possession nor speech.”
Epictetus, Enchiridion, Chapter 44
That stuck with me.
Being better at something doesn’t make you better than someone.
And if you’re better at something, what if you helped someone else become better at it too? What if they got better at it and showed someone else?
That’s the heart of it.
Better At is about thinking through how we can improve others while improving ourselves.
We grow. We pay it forward. We do the work and learn together.
There’s an old joke that goes something like this:
How many marketers does it take to screw in a lightbulb? Just one. But all the others think they can do it better.
Every aspect of an organization is full of this kind of “better than” behavior, not just marketing.
We chase credit, one-up each other, and cover our asses by throwing each other under the bus.
We can’t help it. It’s systemic and it starts at an early age.
It kills effectiveness and alignment no matter how efficient or better we think our silos are.
0 Effectiveness × 5 Efficiency = 0
If cross-functional teams can’t co-create value, no amount of leadgen and demandgen will save them.
Better than creates friction. Better at creates connection.
Creating a culture of curiosity starts by changing what you reward.
Healthy competition is good. Sports is a good example.
But you can’t win the hearts and minds of your teammates by always competing with them or looking for the “easy button” to make yourself bigger than you are.
AI can help you be better at marketing. But only if it sharpens your thinking, not replaces it.
If your team’s output feels generic, the problem isn’t the tool. It’s the fear behind how it’s being used.
Be generous, empathetic, and useful. Ask better questions. You have to be the change you want to see. That’s how you get better at making better contributions.
If you lead a B2B tech company and this resonates, here are 3 things to consider:
You don’t need a re-org. You just need a shift in thinking.
When GTM teams work together, the impact shows up in shorter sales cycles, better conversion rates, and less wasted spend.
You don’t need to overhaul your GTM strategy overnight. But what if you started asking different questions?
Take these into your next leadership meeting:
Better At isn’t a tactic, a course, a playbook. It’s a mindset.
So… where are you trying to be better than, when you could be better at?
Let’s talk about it.
If you like this content, here are some more ways I can help:
Cheers!
This article is AC-A and published on LinkedIn. Join the conversation!
Many GTM teams still track activity instead of impact. Bayesian models estimate what likely played a role. But they don’t explain what caused the result, or what might have happened if something changed. Causal models test and measure what contributed and why.
As mentioned in last week’s Razor, Bayesian models can help marketers get out of the last-touch attribution trap. They give us a way to estimate how likely it is that something contributed to a particular result.
But that’s not the same as knowing what caused the result and why.
Too many GTM teams still confuse probability with proof; correlation with causation. But more precision does not mean more truth.
Causal models answer a different question: what would have happened if we had done something else? That’s what your CFO wants answered. And it’s the one your current model can’t touch.
We need to ask better questions instead of defending bad math.
Much of this discussion was sparked by Mark Stouse on LinkedIn. He clarified a common misconception: that Bayesian modeling is the same as causal inference. It’s not. And that distinction is what we’re getting into.
The past is not prologue.
Mark Stouse, CEO, Proof Causal Advisory
Most attribution models are shortcuts, not models.
Rule-based. Last-touch. “Influenced” revenue. They’re easy to run. Easy to explain. But disconnected from real buying behavior.
Attribution measures who gets credit, not contribution.
Bayesian modeling doesn’t rely on a single touchpoint or fixed credit rule. It estimates how likely it is that something played a role, like a channel, sequence, or delay.
Bayesian models give you a better approximation of influence than rule-based methods. But they stop short of answering the causal question: What made this happen and why?
Most attribution models never get past basic association (correlation).
As Jacqueline Dooley explains in this MarTech article, rule-based methods don’t reflect how buying actually works. They measure what happened, not why it happened.
In other words, most GTM teams are still stuck in Level 1 of the causal ladder.
Bayesian models help you estimate whether something played a role. Not how much credit to assign.
That’s why they help measure things like brand recall, ad decay, and media saturation. They estimate influence but they don’t explain the cause.
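If you want to see what "estimating influence" looks like in practice, here's a minimal sketch in Python using PyMC. The channels, touch counts, and priors are all made up; the point is that the posterior tells you how likely each channel moved the conversion odds, not what would have happened without it.

```python
# Minimal sketch: Bayesian logistic regression of conversion on channel touches.
# All data below is synthetic; channel names and priors are illustrative only.
import numpy as np
import pymc as pm

rng = np.random.default_rng(7)
n = 500
# Synthetic touch counts per account for three hypothetical channels
X = rng.poisson(lam=[1.5, 0.8, 0.4], size=(n, 3))          # email, paid, events
true_beta = np.array([0.15, 0.35, 0.6])
p_true = 1 / (1 + np.exp(-(-2.0 + X @ true_beta)))
converted = rng.binomial(1, p_true)

with pm.Model():
    intercept = pm.Normal("intercept", 0.0, 2.0)
    beta = pm.Normal("beta", 0.0, 1.0, shape=3)              # per-channel effect
    p = pm.math.sigmoid(intercept + pm.math.dot(X, beta))
    pm.Bernoulli("converted", p=p, observed=converted)
    idata = pm.sample(1000, tune=1000, target_accept=0.9)

# The posterior for beta says how likely each channel played a role.
# It does not say what would have happened if you had cut a channel entirely.
print(idata.posterior["beta"].mean(dim=("chain", "draw")).values)
```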
Mark isn’t the only one pushing for clarity here.
Quentin Gallea wrote an excellent article on Medium that details how machine learning models are built to predict outcomes, not explain them. They’re correlation engines. And when teams mistake those outputs for causal insight, bad decisions follow.
If your model only shows what happened under existing conditions, it can’t tell you what would’ve happened if something changed. That’s the whole point of causal reasoning.
Causal AI tools like Proof Analytics (shown below) help teams run “what if” scenarios at scale. They pair machine learning, which handles the messiness of real-world data, with causal logic that explains what can actually make an impact.
Causal modeling shows what might have happened if you changed something, like timing, budget, message.
That’s the question your CFO is already asking.
As Mark pointed out, Bayesian models can’t answer that. Unless you impose a causal structure, they just update likelihoods based on what already occurred.
If you’re only predicting what’s likely under existing conditions, you’re stuck in correlation.
As mentioned, GTM dashboards show you what happened, like clicks. They don’t tell you what those clicks actually contributed to, or why.
Bayesian models help you spot patterns.
That’s useful. But it’s not enough.
Why? Because even though Bayesian models are probabilistic, they don’t model counterfactuals unless a causal structure is added. They estimate likelihoods, not outcomes under different conditions.
If you want to know whether something made a difference (or what would’ve happened if you did it differently) you need a model that can test it.
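Here's a tiny numpy-only sketch of that difference. It isn't Proof Analytics or any real GTM data; the structure and coefficients are invented purely to show why a slope fitted to observed data is not the same as the effect of actually changing spend.

```python
# Toy structural causal model: demand drives both spend and pipeline, and spend
# also drives pipeline. Everything is synthetic; the point is only to show why
# a slope fitted to observed data differs from the effect of changing spend.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

demand = rng.normal(0, 1, n)                                # hidden market demand
spend = 2.0 * demand + rng.normal(0, 1, n)                  # budgets follow demand
pipeline = 0.5 * spend + 3.0 * demand + rng.normal(0, 1, n)

# What a correlation view reports: the observed slope, inflated by demand.
naive_slope = np.cov(spend, pipeline)[0, 1] / np.var(spend)

# What a causal view asks: do(spend + 1) while demand stays whatever it was.
pipeline_do = 0.5 * (spend + 1) + 3.0 * demand + rng.normal(0, 1, n)
causal_effect = pipeline_do.mean() - pipeline.mean()

print(f"observed slope:          ~{naive_slope:.2f}")   # ~1.7
print(f"true effect of +1 spend: ~{causal_effect:.2f}") # ~0.5
```

The observational slope roughly triples the real effect because spend and pipeline share a hidden driver. That gap is exactly what a model without causal structure cannot see.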
So instead of chasing more data, focus on making sense of the data you already have.
By the way, if you’ve never looked at buyer behavior through a statistical lens, Dale Harrison’s 10-part LinkedIn series on the NBD-Dirichlet model is worth bookmarking. This series will help you understand how buyers typically behave in a category:
Rule-based attribution like first/last-touch only tracks what happened. It doesn’t explain what mattered.
Bayesian modeling gets you closer by helping you see patterns. But it doesn’t explain cause.
Causal models let you test what could make an impact, what may not, and why.
And as Mark Stouse pointed out, this only works if you’re using a proper causal framework. Bayesian models can’t tell you what caused something unless that structure is built in.
If you like this content, here are some more ways I can help:
Cheers!
This article is AC-A and published on LinkedIn. Join the conversation!
Traditional attribution models do not help marketers. Last-touch attribution winds up producing click-based metrics that rarely hold up when the CEO or CFO asks, “Where’s the revenue?” Bayesian models offer a better way to measure what’s actually impacting the bottom line, building the brand, and influencing pre-funnel activity. This article shows you how to measure brand impact using Bayesian attribution models, especially for B2B teams tired of broken funnels.
A Bayesian model helps you set and update expectations based on new evidence. Unlike traditional attribution, Bayesian methods can surface marketing impact pre-funnel.
Think weather forecasts: You start with what you know (like the season), then adjust your expectations based on new clues (like thunder).
In marketing, Bayesian modeling weighs each channel’s influence based on how often it contributes to a sale, how recently it was seen, and how it interacts with other touchpoints.
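If you like seeing the arithmetic, here's the simplest possible version of that update in Python: a conjugate Beta-Binomial example. The prior and the conversion numbers are made up.

```python
# The simplest version of a Bayesian update: a Beta prior on a channel's
# conversion rate, refreshed by new (made-up) evidence via conjugacy.
from scipy import stats

# Prior: before the campaign we believe the channel converts around 2%.
prior = stats.beta(2, 98)

# New evidence: 1,000 touched accounts, 35 conversions.
touches, conversions = 1_000, 35

# Beta prior + binomial evidence -> Beta posterior.
posterior = stats.beta(2 + conversions, 98 + touches - conversions)

low, high = posterior.interval(0.95)
print(f"prior mean:     {prior.mean():.1%}")      # ~2.0%
print(f"posterior mean: {posterior.mean():.1%}")  # ~3.4%
print(f"95% credible interval: {low:.1%} to {high:.1%}")
```

Same mechanic as the weather forecast: start with what you already believe, then let the new evidence pull your estimate toward what the data shows.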
Bayesian and causal models can overlap, but they’re not the same. Bayesian models estimate probability, like how likely something is based on data and prior beliefs. Causal models estimate what happens when something changes. The strongest marketing analytics use both: probabilistic thinking to handle uncertainty, and causal structure to guide decisions.
If you want to geek out a little more, Niall Oulton at PyMC Labs wrote an excellent piece on Medium about Bayesian Marketing Mix Modeling.
It’s a great place to dive deeper.
Instead of guessing or oversimplifying, Bayesian modeling uses probability and real-world behavior to show what actually contributed to a sale, and how much.
Elena Jasper provides a good explanation using a research paper published in 2022, Bayesian Modeling of Marketing Attribution. It shows how impressions from multiple ads shape purchase probability over time. In a nutshell, too many impressions (especially from display or search) can actually reduce the chance of conversion.
Even more insightful, the model gives proper credit to owned and offline channels that traditional attribution ignores.
Check out Elena’s Bayesian Attribution podcast episode.
This is where things get interesting for brand builders.
Another study from 2015, The Impact of Brand Familiarity on Online and Offline Media Effectiveness, used a Bayesian Vector Autoregressive (BVAR) model to track synergy between media types.
Here’s what they found:
In other words, the value of your brand influences how effective your media is. So if you’re only looking at last-touch clicks, you’re missing the bigger picture.
This chart compares how different types of media synergy play out based on brand familiarity. Familiar brands benefit more from reinforcing messages within the same (online) channel. Unfamiliar brands get a bigger boost from cross-channel combinations, especially from pairing digital with offline.
SOURCE: The Impact of Brand Familiarity on Online and Offline Media Effectiveness (Pauwels et al., 2015), See Section 4.4, Table 3
Bayesian models can also account for non-converting paths. That means they help you see how early exposures from media like TV, radio, podcasts, branded search, and earned media changed the likelihood of a purchase, even if the customer didn’t buy right away.
The ability to prove that your brand is being remembered is the holy grail of brand marketing.
This chart compares how two attribution models assign credit for a conversion. Bayesian models are better suited for evaluating pre-funnel impact. They account for influence, not just transactions.
These models don’t deliver hard proof. They provide probabilistic estimates, like how likely each channel or impression influenced conversion, even when no one clicks. It’s not deterministic, but it’s a far better approximation of real buyer behavior.
In a nutshell, memory and exposure matter, even when they don’t lead directly to a form fill.
When you start combining that with media decay rates and interaction effects, you finally have a way to show how long your brand-building efforts stick and when they fade.
SOURCE: Bayesian Modeling of Marketing Attribution (Sinha et al., 2022), See Section 4.2.3: “Interaction Effects”
This chart shows how quickly a single ad loses its persuasive power. Influence fades exponentially, especially for short-lived channels like display or search. This is important for building brand reputation because a memorable first impression doesn’t last forever. Brand building isn’t one and done.
This supports what the Sinha Bayesian attribution paper modeled: ad influence is not equal, and timing matters.
SOURCE: Bayesian Modeling of Marketing Attribution (Sinha et al., 2022). See Section 4.2.2: “Direct Effect”; Figure 5: Posterior distribution of ad decay parameters
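A quick sketch of what that decay looks like in code. The per-channel decay rates below are illustrative placeholders, not the posterior estimates from the paper.

```python
# Sketch of exponential ad decay: influence(t) = exp(-decay_rate * t).
# The per-channel decay rates below are illustrative placeholders,
# not the posterior estimates from Sinha et al.
import numpy as np

days = np.arange(0, 31)
decay_rates = {"display": 0.35, "paid_search": 0.25, "podcast": 0.08}  # per day

for channel, rate in decay_rates.items():
    influence = np.exp(-rate * days)        # normalized to 1.0 at exposure
    half_life = np.log(2) / rate            # days until influence halves
    print(f"{channel:12s} half-life ~ {half_life:4.1f} days, "
          f"influence left after 2 weeks ~ {influence[14]:.2f}")
```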
This chart shows saturation: how conversion probability builds with more ad impressions, then flattens out. Most SaaS GTM motions (self-serve, freemium, free trial) ramp up fast but fatigue soon after, peaking around 12 impressions. Enterprise GTM builds more slowly but needs more impressions to hit its ceiling (closer to 25).
Regardless of the model, impressions lose influence over time. That’s ad decay in action. But the number of impressions it takes to move the needle? That’s where most SaaS solutions and enterprise solutions part ways.
SOURCE: Bayesian Modeling of Marketing Attribution (Sinha et al., 2022), See Section 4.2.3: “Interaction Effects”; Figure 7: Negative interaction from high ad frequency. Real-world ad-to-pipeline benchmarks from WordStream, Databox, and SaaStr.
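Here's a small sketch of that saturation shape using a simple Hill-type curve. The parameters are hypothetical; they only mimic the "SaaS peaks early, enterprise needs more impressions" pattern described above, not the paper's fitted values, and they ignore the negative interaction from over-frequency.

```python
# Sketch of impression saturation using a simple Hill-type curve.
# Parameters are hypothetical and only mimic the shape described above.
import numpy as np

def conversion_prob(impressions, ceiling, half_sat, slope):
    """Probability rises with impressions, then flattens near `ceiling`."""
    x = np.asarray(impressions, dtype=float)
    return ceiling * x**slope / (half_sat**slope + x**slope)

impressions = np.arange(0, 31)
saas = conversion_prob(impressions, ceiling=0.08, half_sat=5, slope=2)         # ramps early
enterprise = conversion_prob(impressions, ceiling=0.12, half_sat=14, slope=2)  # slower build

for k in (6, 12, 25):
    print(f"{k:2d} impressions -> SaaS {saas[k]:.3f}, enterprise {enterprise[k]:.3f}")
```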
Most brands aren’t ready to run full Bayesian models. That’s OK.
It’s better to tackle the low-hanging fruit and build from there:
So if you’re only measuring what’s easy to measure, you’ll keep spending money in the wrong places and frustrating your exec team.
Like causal models, Bayesian models are essential tools in B2B marketing analytics. Relying on click-based attribution hides where budget is wasted and where your brand building is pulling its weight.
Causal and Bayesian models aren’t mutually exclusive. Bayesian Structural Time Series, for example, blend both and help estimate impact while accounting for timing, media decay, and external variables.
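For the curious, here's a rough, hand-rolled flavor of that idea in Python with PyMC. It is not a full BSTS implementation, just a local trend plus a lagged spend effect, and every series, the 8-week lag, and all priors below are synthetic.

```python
# A hand-rolled, BSTS-flavored sketch in PyMC: a local (random-walk) trend plus
# a lagged spend effect. Not a full BSTS implementation; every series, the
# 8-week lag, and all priors are synthetic and illustrative.
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
weeks = 104
spend = rng.gamma(shape=2.0, scale=10.0, size=weeks)
lagged_spend = np.roll(spend, 8)          # pretend pipeline reacts ~8 weeks later
lagged_spend[:8] = spend.mean()
pipeline = 50 + 0.4 * np.arange(weeks) + 1.2 * lagged_spend + rng.normal(0, 5, weeks)

with pm.Model():
    intercept = pm.Normal("intercept", 0.0, 50.0)
    step_sd = pm.HalfNormal("step_sd", 1.0)
    steps = pm.Normal("steps", 0.0, step_sd, shape=weeks)
    trend = pm.Deterministic("trend", pm.math.cumsum(steps))   # slow-moving baseline
    beta_spend = pm.HalfNormal("beta_spend", 2.0)               # lagged spend effect
    sigma = pm.HalfNormal("sigma", 10.0)
    mu = intercept + trend + beta_spend * lagged_spend
    pm.Normal("pipeline", mu=mu, sigma=sigma, observed=pipeline)
    idata = pm.sample(1000, tune=1000, target_accept=0.95)

print(idata.posterior["beta_spend"].mean(dim=("chain", "draw")).values)  # ~1.2
```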
These models and tools help us make smarter marketing decisions.
If you like this content, here are some more ways I can help:
Cheers!
This article is AC-A and published on LinkedIn. Join the conversation!
Many GTM teams still ask the wrong questions. It’s costing them trust, clarity, and pipeline. Questions centered on critical thinking, buying behavior, and brand recall help GTM teams shift from chasing short-term results to building long-term buyer confidence. If you want better outcomes, if you want to earn trust, ask better questions.
Missed the first 4 parts?
Part 5 gets into how GTM teams can stop the pursuit of more MQLs and focus on earning confidence and trust by asking better questions.
Because if you’re still asking, “How many leads did we get this week?” you’re not solving for what’s stalling your growth.
When pipeline is soft or revenue is lagging behind, the same questions always come up:
That’s leadgen panic in action. And it’s a recipe for bad decisions and looking at the wrong metrics.
Why? Because when we’re reactive we make knee-jerk decisions that put us into a spin cycle.
Instead of solving real problems, we tend to default to short-term busy work and track vanity metrics that look good on dashboards. It’s the reason why so many funnel stages are filled with uninterested buyers and unqualified activity.
FYI, funnels are another outdated trap, but I digress.
It’s best to step back and ask questions that put customers, buying behavior, and timing at the center.
You’ll get better answers for your marketing, your buyers, and your data.
If your marketing is broken, more MQLs won’t fix it. Asking better questions will.
Here are a few worth considering:
You get the idea.
Without knowing what’s causing poor marketing results, we remain forever trapped in the leadgen spin cycle. Always reacting. Always chasing our tails.
Once we know why something is happening, the path forward becomes much clearer. It takes the pressure off so we can focus on moments that actually make an impact.
For more, see 12 Questions Every B2B Tech Marketer Should Ask In 2025.
In a recent LinkedIn post, Mark Stouse, CEO of Proof Causal Advisory, posed 12 questions to ask when there’s a push to make things “easy and simple” at work.
Simplifying for clarity is valuable, sure, but oversimplifying complex solutions just to make them feel and sound “easy” is misleading.
Are we simplifying tasks to enhance understanding, or are we oversimplifying to the detriment of critical thinking? Oversimplification can mask underlying issues and distract us with vanity metrics like MQLs instead of focusing on genuine engagement and earning trust with our audience.
That distinction matters, especially in B2B tech where everyone looks the same, smells the same, and says the same things.
When we create MQL factories we lose nuance, oversimplify the process, and skip the hard thinking. And when that happens, we chase the wrong outcomes, ask the wrong questions, and miss what really drives revenue.
Asking better questions demonstrates critical thinking. It’s how mature marketing teams challenge default assumptions and make better decisions.
You don’t need to make your strategy easy. You need to make it effective.
The 6sense 2024 Buyer Experience Report shows that 81% of buyers have already picked their vendor before they ever contact sales.
And they’re doing it quietly. In their own time. On their own terms.
So if your GTM team is focused only on MQLs, you’re way too late to the party.
Buyers want to feel confident. They seek clarity, proof that you’ve done this before. They put their feelers out months before you put them in your funnel.
Ask yourself:
Signals like that build pipeline you can trust.
One of the best questions any founder, CMO, or GTM team can ask is:
“What do we want to be remembered for?”
Because being remembered is more valuable than having a better widget.
You can have a great product and still lose. You can offer better services and still be ignored.
The brands that win are the ones that buyers remember. The ones that help, not pitch. They show up early. They stay visible.
They also know B2B buying is not a linear path to fast money. They understand what brand recall looks like.
B2B Marketing’s fixation with MQLs gave us volume, not clarity. Like drugs, it rewarded short-term fixes and eroded brand health.
If you want marketing to truly drive revenue you need to teach the business how it actually works. And that starts with better questions.
Kerry Cunningham also wrote an excellent piece about The De-Industrialization of B2B Marketing. It’s worth reading and sharing with your leaders.
Thanks for following along this 5-part series on The End of MQLs. If you missed the others, catch them on Achim’s Razor.
If you like this content, here are some more ways I can help:
Cheers!
This article is AC-A and published on LinkedIn. Join the conversation!
B2B buyers don’t buy on your marketing timeline. This is why B2B marketing ROI shows up months after the campaign. The sales we generate today are the result of the marketing we did 2-3 quarters ago (sometimes longer). Time lag impacts pipeline so much that sales teams often misjudge it. Resetting expectations is the only way to stop chasing fake deal velocity.
We’ve already covered a lot of ground when it comes to the MQL trap:
Part 4 gets into why results take longer than anyone wants to admit and how to build a GTM strategy that respects the clock buyers are actually on.
Even if you do everything right—launch the right campaign, reach the right people, hit the right message—you may not see anything in the pipeline for months.
Why?
Because buying is harder than selling. It’s slow, non-linear, and cautious.
Dreamdata’s analysis shows it takes at least 6 months for most B2B marketing efforts to show up in pipeline.
That means this quarter’s results happened because of the marketing you did 2-3 quarters ago. And it can take much longer if you sell complex and pricey solutions.
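If you want a quick gut check on your own lag, here's a rough Python sketch. The data below is synthetic with a 7-month lag baked in; swap in your own monthly spend and pipeline exports, and treat the result as a sanity check for expectation-setting, not causal proof.

```python
# Rough lag check: correlate pipeline created each month with marketing spend
# shifted back by k months and see where the association peaks.
# Synthetic data with a 7-month lag baked in; column names are placeholders
# for your own CRM/finance exports.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
months = pd.period_range("2023-01", periods=36, freq="M")
spend = rng.uniform(50, 150, size=36)
pipeline = 200 + 3.0 * np.roll(spend, 7) + rng.normal(0, 40, size=36)
df = pd.DataFrame({"month": months,
                   "marketing_spend": spend,
                   "pipeline_created": pipeline})

lag_corr = {k: df["pipeline_created"].corr(df["marketing_spend"].shift(k))
            for k in range(0, 12)}
best = max(lag_corr, key=lag_corr.get)
print(pd.Series(lag_corr).round(2))
print(f"Strongest association at a lag of ~{best} months")
# Reminder: this is correlation, useful for setting expectations,
# not proof of what the spend caused.
```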
One of the biggest mistakes B2B tech companies have made is expecting marketing to operate like a vending machine.
“The ‘evidence’ for how the B2B GTM system operates has existed primarily in the minds of those who believed that they could make it a deterministic gumball machine. They adopted a ‘if this, then that’ sequencing mentality that does not reflect real life, and then they pounded audiences for 20 years to ‘generate demand.’ At no time was this about the customer. It was always about the hockeystick.”
Mark Stouse, CEO, Proof Causal Advisory
Even if marketing works, it won’t look like it worked, at least not right away.
For more, see: Purchasing Timelines in B2B
Humans tend to exaggerate and distort events. It happens a lot with victims and witnesses of crime. What seems like seconds is actually minutes.
And just like eyewitnesses underestimate how long something took, we tend to compress the buying timeline, focusing only on what we can see (the sales conversation), not what came before it.
Here’s a pattern you’ve probably seen:
The reality is that the buyer started researching months before the first sales call.
Think of marketing like farming. You don’t plant seeds, dig them up a month later, and call it a day because they didn’t grow fast enough. Let your marketing take root.
Most CEOs and CFOs want to measure the impact of marketing right now. But early-stage marketing (especially brand work) doesn’t show up in pipeline for months.
Set expectations by tracking and reporting these metrics:
Important: Good forecasting starts with data integrity. If your CRM isn’t capturing the entire buying journey, your plan is built on guesses.
Above all, help educate your executive team on what early traction and long-term brand building looks like. Remind them to be patient. This isn’t a quick fix.
B2B marketing ROI doesn’t land in the same quarter, yet most marketers still struggle to measure ROI beyond six months.
That disconnect creates pressure to chase the wrong metrics like MQLs.
Do this instead:
If the leadership team expects marketing to create pipeline in 90 days, they’re not being unreasonable. They’re just misinformed, and it’s Marketing’s job to help them understand.
B2B buyers have their own timelines. You have to respect the time lag that’s unique to their buying cycle.
You also have to give brand marketing time to take root and deliver long-term ROI. That’s how you stay relevant and earn trust.
Next week, Part 5 wraps up the series by getting into critical thinking and asking smarter questions.
If you like this content, here are some more ways I can help:
Cheers!
This article is AC-A and published on LinkedIn. Join the conversation!
Most buyers already know who they’ll buy from before they fill out a form. If you’re not remembered when they’re back in market, you don’t make the cut. This article covers how brand-building for B2B tech helps you get on the shortlist and how to prove it’s working in ways your CEO and CFO will actually care about. If you want to build brand in B2B tech, focus on being remembered instead of being the best.
By the time most B2B buyers contact a vendor, the deal is already more than halfway done. As covered in Part 2: Real Buying Signals, the shortlist is set more than 80% of the time.
So if buyers have their winner selected on Day 1, how do you make sure it’s you?
Brand-building is your best defense because it won’t matter how good your product is if no one remembers you.
And being remembered is more valuable than having a better widget.
The 95:5 Rule says that at any given time, only 5% of your market is actively looking to buy. Sadly 3 out of 5 deals end in no decision because that’s the safest decision, especially when confidence and trust have yet to be earned.
The remaining 95% won’t take your calls, answer your emails, or fill out your forms because they’re not ready. But they are watching. Learning. Clicking your ads or downloading your PDFs is not buying intent. It’s merely curiosity (assuming they are human and not bots).
Every time you show up on their radar, you earn mental availability for when the time is right. And when that happens, the brand they remember is the brand they add to their list.
Brand memory reduces perceived risk because reputable brands instill confidence and trust.
When buyers see your brand consistently over time, they perceive you as legit, even if they have never engaged with you directly before.
Research from Forrester and McKinsey shows that the Divide Between CMOs and CEOs is Growing.
It’s not that CEOs and CFOs don’t value their brand. It’s because Marketing struggles to demonstrate its direct impact on revenue.
As long as brand initiatives are perceived as disconnected from tangible business outcomes, they risk being sidelined in favor of strategies with more immediately measurable returns.
Important: Marketing does not actually create revenue. But it can positively impact revenue when executed effectively both short-term and long-term. That in turn mitigates risk, earns buyer trust, and drives future growth (things CEOs and CFOs care about).
First, don’t rely on marketing-sourced metrics like last-touch attribution. As discussed in Part 1: Why MQLs Don’t Work, that’s a surefire way to get shown the door.
Instead, move away from granular MQLs and consider buying group signals that are part of an opportunity (this includes AgenticAI).
Example: Three different people from the same company downloading the same PDF is more valuable than one person downloading three different PDFs.
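Here's a small sketch of what that roll-up can look like in practice. The account, person, and asset names are placeholders for whatever your marketing automation export actually calls them.

```python
# Rolling individual downloads up into account-level buying-group signals.
# Column names and values are hypothetical placeholders.
import pandas as pd

events = pd.DataFrame({
    "account": ["Acme", "Acme", "Acme", "Globex", "Globex", "Globex"],
    "person":  ["a@acme.com", "b@acme.com", "c@acme.com",
                "x@globex.com", "x@globex.com", "x@globex.com"],
    "asset":   ["pricing.pdf", "pricing.pdf", "pricing.pdf",
                "pricing.pdf", "ebook.pdf", "webinar.pdf"],
})

signals = (events.groupby("account")
                 .agg(people=("person", "nunique"),
                      assets=("asset", "nunique"),
                      downloads=("asset", "size")))

# Acme: 3 people on one asset (a buying group forming) beats
# Globex: 1 person grazing three assets (curiosity, maybe a bot).
print(signals.sort_values("people", ascending=False))
```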
The metrics below show whether or not your brand is building mindshare.
Brand-building is not the opposite of performance marketing. It’s the precursor to it.
If you want to be the vendor buyers remember, you have to earn their confidence and trust before they start their buying cycle again. There are no shortcuts.
You can get on the B2B buyer’s shortlist by showing up consistently where they hang out. When they’re ready, they’ll remember you.
If you’re new to the game and they don’t know much about you, start building up your brand reputation and mitigate as much risk as possible.
Stay visible. Be generous.
Next week we’ll dig into Part 4: how to set expectations around time lag.
If you like this content, here are some more ways I can help:
Cheers!
This article is AC-A and published on LinkedIn. Join the conversation!