GALAHAD
AI Strategy

The Consultancy Trap: Why AI Advisory Is Full of People Who've Never Built Anything

Last updated 2026-03-23

Here's an uncomfortable truth about the AI consulting market: most of the people advising organisations on AI strategy have never built an AI system that went to production. They've written decks. They've facilitated workshops. They've produced maturity models and roadmaps and transformation frameworks. But they've never sat with an engineering team at 2am debugging why the model is hallucinating in production, or redesigned a data pipeline because the training data was poisoned, or told a CEO that the AI strategy they approved six months ago was wrong. The gap between advising on AI and building with AI is the gap between theory and reality — and it's the gap where most AI consulting lives.

The maturity model industrial complex

The first thing most AI consultancies produce is a maturity model. You've seen them: a five-stage framework that places your organisation somewhere between 'ad hoc' and 'optimised,' accompanied by a reassuring recommendation that you need to move from stage two to stage four, and here's the roadmap to get there.

These models aren't useless, but they're deeply insufficient. They describe states without explaining transitions. They benchmark against an idealised end state that may not be relevant to your specific context. And they create a false sense of progress: moving from 'emerging' to 'established' on a maturity model feels like achievement, even when nothing has actually changed in production.

The consultancies that produce these frameworks are selling certainty in an uncertain domain. The maturity model is comforting because it implies a predictable path. But AI transformation isn't predictable. It's messy, iterative, and full of surprises that no framework anticipated — because the people who wrote the framework never encountered those surprises themselves.

Why the gap between advising and building matters

When your AI consultant has never built a production system, their advice is necessarily theoretical. They can tell you what good looks like based on case studies and frameworks. They can't tell you what goes wrong in practice, because they've never been there when it did.

This matters because the hardest part of AI isn't choosing the right model or designing the right architecture. It's the operational reality: data quality issues that only emerge at scale, integration challenges with legacy systems, user adoption problems that no change management framework predicted, and governance requirements that conflict with delivery timelines.

An adviser who's only ever worked at the strategy layer doesn't see these problems until they've already derailed the project. An adviser who's built production systems recognises the warning signs early — because they've been burned by them before.

I don't say this to be provocative for its own sake. I say it because I've watched organisations spend millions on AI strategies from firms that couldn't write a line of code, only to discover that the strategy was architecturally infeasible, operationally naive, or commercially disconnected from reality. The advice was intellectually sound. It just didn't survive contact with the real world.

The Galahad model: build and advise

Galahad exists because of this gap. We built it as the consultancy we wished existed when we were on the client side — one where the people advising on AI strategy are the same people building AI systems.

Our advisory work is grounded in our products: Grail for generative engine optimisation, Lancelot for media quality and fraud detection, Enable for AI capability building. These aren't theoretical offerings. They're production systems that we build, maintain, and operate. When we advise a client on GEO strategy, we've built the platform that measures it. When we advise on media transparency, we've built the system that detects fraud. When we design a training programme, we've built the platform that delivers it.

This isn't about selling products through consulting or consulting through products. It's about ensuring that every piece of advice we give has been tested against the reality of building and shipping. The feedback loop between building and advising is what makes the advice credible — because we discover the problems, the edge cases, and the failure modes in our own work before we encounter them in client engagements.

How to spot the gap

If you're evaluating AI consultancies, ask three questions. First: what have you built? Not what have you advised on — what have you actually shipped to production? If the answer is a list of client engagements with no owned technology, that tells you something about the depth of their technical understanding.

Second: who will be on my team? Not the partner who sold the engagement — the people who will actually do the work. What's their background? Have they built AI systems, or have they built decks about AI systems? The gap between these two profiles is the gap between advice that works and advice that sounds right.

Third: what went wrong on your last engagement? A consultancy that can't articulate failure hasn't been close enough to execution to encounter it. The best advisers are the ones who've learned from their own mistakes — in their own systems, not just in client post-mortems.

The AI consulting market is growing fast, and much of that growth comes from firms repackaging existing management consulting as 'AI strategy.' Caveat emptor. The organisations that choose advisers based on building credentials rather than brand prestige will get better outcomes — because the advice will be grounded in reality, not in frameworks.

Frequently Asked Questions

What's wrong with using a Big Four firm for AI strategy?

Nothing inherently — some Big Four teams do strong work. The risk is that the team assigned to your engagement may have consulting backgrounds rather than building backgrounds. Ask specifically about the team's production AI experience, not just the firm's client list. The quality of AI consulting varies enormously within large firms, not just between them.

How is Galahad different from other AI consultancies?

We build production AI systems — Grail, Lancelot, Enable — and our advisory work is grounded in that building experience. The people who advise clients are the same people who build and maintain our products. This creates a feedback loop between theory and practice that purely advisory firms can't replicate.

Does every AI consultant need to be a developer?

No. Strategy, governance, and change management are legitimate consulting disciplines that don't require coding. But the consultancy as a whole needs building experience — it needs people who've shipped production systems informing the strategy, even if they're not the ones facilitating the workshop. Advisory without building experience is theory without a reality check.

How do I evaluate AI consultancies effectively?

Ask what they've built (not just advised on), who specifically will work on your engagement and their backgrounds, and what's gone wrong on previous projects. Evaluate their owned technology and whether their advice is grounded in operational experience. Brand prestige and client logos are less reliable indicators than building credentials and candid failure stories.
Related Articles

The Board AI Briefing: What to Say, What to Leave Out

Every CFO and board member has the same three questions about AI. Get the framing wrong and you'll spend the next year defending ROI. Get it right and you move fast.


Over 1,000 Senior Leaders Trained. Here's the Thing They All Struggled With.

Running AI enablement training across 1,000+ senior leaders clarifies something fast: the hard part isn't capability. It's deciding where AI stops and human judgement begins.


The 90-Day AI Audit: What to Measure, What to Ignore

After 90 days, most AI projects have either proven their value or revealed their flaws. Here's how to separate signal from sunk cost.


Want to go deeper?

If this article raised questions about your own AI strategy, we're happy to talk it through. No pitch. No pressure.


This article provides general information and opinion. It does not constitute legal, financial, or technical advice. Always consult qualified professionals for decisions specific to your organisation.

Galahad
AI that knows its place. · Founded by Ross Barnes
hello@galahadgroup.co.uk
© 2026 Galahad. All rights reserved. London · Global