Getting Started with AI Is Easy. Making It Matter Is Hard.

Co-authored by Gerard Pietrykiewicz and Achim Klor

AI rollouts often follow the same script: Leadership announces an initiative, a team lead books a training session, someone demos a prompt that turns a paragraph into bullet points. People nod and maybe try it once or twice.

Then Monday hits.

Deadlines pile up, Slack is noisy, and a quiet question sits in the background... Am I supposed to use this? Or stay out of trouble?

That is where most adoption dies. Not because people don’t get it. Because nobody told them where the line is, and they have watched enough colleagues get shown the door under the banner of “AI efficiency” to know the cost of guessing wrong.

Takeaways

  • Only 6% of companies capture meaningful enterprise value from AI. The gap is organizational, not technical.
  • Most AI failures are governance failures. Click-and-hope is how production databases get deleted in nine seconds.
  • A monthly seminar won’t change how people use AI. Clear policies, scoped credentials, safe sandboxes, and visible leadership use will.

The fear is not irrational

Layoffs are real, and so is the framing executives use to justify them.

It usually gets spun as “cost cutting” or “restructuring.” More often than not, that language is covering for poor judgment and weak management. AI just gives bad decisions a more fashionable label. 

Writer’s 2026 enterprise survey of 2,400 executives and employees found that 60% of companies plan to lay off workers who will not adopt AI, and 64% of CEOs fear losing their own jobs if they fail to lead the transition. The same survey found 55% of execs describe their AI rollout as “a chaotic free-for-all,” and 54% say AI is “tearing their company apart.” Stanford’s 2026 AI Index puts a third of organizations on track for AI-driven workforce reductions in the next year.

When leadership talks about AI mostly as a cost-cutting lever, asking the same workforce to enthusiastically adopt it is asking them to hand over the knife that may be used on them.

People aren’t dumb. They notice.

Some experiment in private using personal accounts. UpGuard’s 2025 research found more than 80% of workers, including nearly 90% of security professionals, use unapproved AI tools at work.

And it’s not a training problem. It is a trust and governance problem. It doesn’t get solved with a monthly all-hands or a 50-page policy nobody reads.

What just happened at PocketOS

On Friday, April 25, 2026, an AI coding agent deleted the production database and all volume-level backups at PocketOS.

It took nine seconds.

The agent was Cursor running Claude Opus 4.6, widely considered one of the most capable coding models available.

According to founder Jer Crane, the agent hit a credential mismatch in staging, found a Railway API token sitting in an unrelated file, and decided “entirely on its own initiative” to fix the problem by deleting the volume. No confirmation prompt. No human in the loop.

Here is the part that should keep CIOs and CFOs up at night. The token had been created for managing domains. But Railway’s system gave it full permissions across every operation in the account, including destructive ones.

In other words, a key meant for the front door opened the vault. Yikes!
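
The fix on the platform side is old-fashioned least privilege. Below is a minimal sketch of what scope-checked tokens could look like. Every name in it, from the scope strings to the require_scope helper, is hypothetical; it illustrates the pattern, not Railway's actual API.

```python
# Minimal sketch of scope-checked tokens. All names are hypothetical;
# this illustrates least privilege, not Railway's actual API.

class ScopeError(Exception):
    pass

# Each token carries only the scopes it was created for.
TOKEN_SCOPES = {
    "domain-mgmt-token": {"domains:read", "domains:write"},
}

def require_scope(token_id: str, scope: str) -> None:
    """Refuse any operation the token was never explicitly granted."""
    if scope not in TOKEN_SCOPES.get(token_id, set()):
        raise ScopeError(f"{token_id} lacks scope {scope!r}")

def delete_volume(token_id: str, volume_id: str) -> None:
    # Destructive operations demand their own scope. A token minted
    # for managing domains should fail here, not open the vault.
    require_scope(token_id, "volumes:delete")
    print(f"deleting {volume_id}")  # the real platform call goes here

# A domain-management token trying to delete a volume gets blocked.
try:
    delete_volume("domain-mgmt-token", "prod-db-volume")
except ScopeError as e:
    print(f"blocked: {e}")
```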

When asked to explain itself, the agent produced a written confession that started with “I violated every principle I was given” and listed each safety rule it had broken. Crane called it a “systemic failure” that made the incident “not only possible but inevitable.”

Railway’s CEO restored the data using internal disaster backups, but PocketOS still lost more than 30 hours of customer-facing operations and had to fall back to a three-month-old backup for some records. Customers showed up at car rental counters where staff had no booking records to find them by.

Systemic. That is the right word.

The AI did not malfunction. It did exactly what an autonomous agent does when nobody scopes its access or defines what it can and cannot touch.

This is the next phase of the problem. AI is no longer just drafting copy or summarizing meetings. It is taking action against production systems, and the cost of getting it wrong is no longer a bland paragraph. It is downtime and lost data.

Delegation is not abdication

We saw an earlier version of this pattern in How Not To Hire With AI. Recruiters ran candidates through AI screeners, accepted the rankings, and moved on. The bias and the bad calls came out later.

AI just makes that pattern faster and more expensive.

Delegation means you define the task, set the boundaries, and own the outcome. Abdication means you click run and hope.

Too many teams think they’re delegating when they’re not.

The two failure modes

  1. “The tool will handle it.” People treat AI like a vending machine. Prompt in, answer out, ship it. The output sounds fine, which is exactly the problem. It sounds right just long enough to pass the next person’s review, and that person is also moving fast.
  2. “I will use it once it is perfect.” Someone tries it once. It hallucinates a citation or breaks a formula. They go back to manual work and wait for the tool to mature instead of learning to work with it. So nothing changes.

One group moves too fast without thinking. The other group never moves.

Both miss the point: AI is not a replacement for judgment. It is a tool that demands more of it.

What separates the companies getting real value

McKinsey’s 2025 State of AI survey of nearly 2,000 organizations found that only about 6% are capturing meaningful enterprise-level value from AI. That’s an organizational gap, not a technical gap.

The high performers do two things differently.

  1. They are roughly three times more likely to fundamentally redesign workflows around AI rather than bolt it onto existing processes.
  2. They are far more likely to have defined human validation rules: 65% versus 23% for everyone else.

Translation: The companies winning with AI have decided in advance which outputs and actions need a human to check the work. Everyone else is improvising.
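
What “defined human validation rules” means in practice can be boringly concrete. Here is a sketch, with invented action categories and review levels, of the kind of short policy a team could write down in advance and enforce in code:

```python
# Human validation rules as data. The categories and review levels are
# invented for illustration; the point is deciding in advance which
# outputs and actions need a human check, instead of improvising.

REVIEW_POLICY = {
    "draft_copy":       "self",       # author reads it before it ships
    "customer_email":   "peer",       # a second person signs off
    "schema_migration": "lead",       # a named owner approves
    "data_deletion":    "forbidden",  # never unattended, no exceptions
}

def review_level(action: str) -> str:
    # Unknown actions get the strictest rule by default, not the loosest.
    return REVIEW_POLICY.get(action, "forbidden")

print(review_level("draft_copy"))    # self
print(review_level("wipe_prod_db"))  # forbidden
```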

The same survey found that 51% of organizations reported at least one negative AI incident in the past year. PocketOS is not an outlier. It’s the visible end of a much wider pattern.

What leaders are actually failing to build

Here is what we see most often:

Leadership wants more output and faster execution. They push the mandate down and expect their teams to figure out AI on their own. When something breaks, the person closest to the keyboard gets blamed.

That’s anything but adoption.

If you want AI to work in your organization, put the scaffolding in place first:

  • Clear policies on what AI can do unattended, what needs review, and what is off-limits. A short list people can hold in their head, not a doc nobody reads.
  • Real governance for agentic tools that take action. Production access, write permissions, and deletion rights need scoped credentials, approvals, logging, and rollback by default. The PocketOS incident was not just a credential problem. It was an autonomous agent with broad reach that found a key it should never have had and used it without a confirmation step. Railway has already changed its API in response. (A minimal sketch of this kind of gate follows this list.)
  • A safe environment to learn. Sandboxes, defined low-stakes use cases, and permission to try, fail, and report what broke without fear of being walked out the door.
  • Training as scaffolding, not theater. Ongoing, role-specific, tied to actual workflows. Champions inside teams who translate the abstract into the practical Monday-morning version.
  • Visible leadership use. If executives never show their own messy prompts, mistakes, and corrections, nobody else will either.
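
To make the governance bullet concrete, here is a minimal sketch of a confirmation gate that an agent's destructive tool calls could be routed through. The tool names, the confirm flow, and the log file are hypothetical scaffolding, not any vendor's API; the pattern is what matters. Destructive actions are denied by default, and every action leaves a trail.

```python
import json
import time

# Hypothetical human-in-the-loop gate for agent tool calls.

DESTRUCTIVE = {"delete_volume", "drop_table", "purge_backups"}

def audit(entry: dict) -> None:
    # Append-only trail, so every agent action is reconstructable later.
    with open("agent_audit.log", "a") as f:
        f.write(json.dumps({"ts": time.time(), **entry}) + "\n")

def gated_call(tool: str, args: dict, run_tool) -> str:
    """Route an agent's tool call through a human confirmation step."""
    if tool in DESTRUCTIVE:
        answer = input(f"Agent wants to run {tool}({args}). Allow? [y/N] ")
        if answer.strip().lower() != "y":
            audit({"tool": tool, "args": args, "outcome": "denied"})
            return "denied by human reviewer"
    result = run_tool(tool, args)
    audit({"tool": tool, "args": args, "outcome": "executed"})
    return result

# Example: the agent runtime calls gated_call instead of the tool directly.
print(gated_call("delete_volume", {"id": "prod-db"}, lambda t, a: "done"))
```

A gate this crude would still have put a human between the agent and the volume.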

Training is in there. It’s just not the whole answer, or even the first one. The first is creating a safe place to use the tool, like a skunkworks.

Final thoughts

AI does not create accountability problems. It reveals them. It amplifies judgment, or the lack of it.

If your people think clearly and have room to work, it helps. If they are scared, under-equipped, and waiting to be blamed, it scales the problem.

So the question is not “how do we train people on AI.”

The question is where in your organization you are still demanding results without giving people the systems, guardrails, and safety to do the work.

Fix that first.



Cheers!

Achim is a fractional CMO who helps B2B GTM teams with brand-building and AI adoption. Gerard is a seasoned project manager and executive coach helping teams deliver software that actually works.

This article is AC-A and published on LinkedIn. Join the conversation!