GALAHAD

Most UK Businesses Are Hiding Behind Brexit to Ignore AI Regulation

Last updated 2026-03-23

I keep hearing the same line from UK leadership teams: 'We're not in the EU any more, so the AI Act doesn't apply to us.' It's become a convenient excuse to do nothing. And it's wrong — not just legally, but strategically. The EU AI Act has extraterritorial reach. If your AI touches an EU citizen, you're in scope. But here's the part nobody wants to hear: the UK's own regulatory patchwork is actually harder to comply with than a single, clear framework. You don't have less regulation. You have more regulation, scattered across more regulators, with less clarity. Brexit didn't simplify your AI compliance. It made it worse.

The Brexit exemption is a fantasy

I've sat in boardrooms where the legal team has confidently declared that the EU AI Act is 'not our problem.' They're wrong, and it's the kind of wrong that gets expensive.

The EU AI Act works like GDPR: it applies based on where the impact is felt, not where the company is registered. If your AI system makes decisions about EU citizens — customers, employees, users — you're in scope. That means any UK SaaS platform accessible in the EU. Any employer using AI in recruitment for EU-based staff. Any financial services firm with EU clients.

The organisations treating Brexit as an AI regulation exemption are the same ones that scrambled to comply with GDPR after pretending it didn't apply to them either. We've seen this film before. The ending is fines and rushed remediation.

The UK's approach is worse, not better

The UK government chose sector-specific regulation over a single horizontal framework. On paper, this sounds flexible and proportionate. In practice, it's a compliance nightmare.

You've got the ICO looking at data protection implications, the FCA building AI guidance for financial services, the CMA examining competition effects, the Equality and Human Rights Commission watching for discrimination, and the AI Safety Institute doing frontier model evaluations. None of them are coordinating. None of them have the same definitions. None of them have the same enforcement timelines.

So congratulations: instead of one framework to comply with, you've got five. Instead of one regulator to satisfy, you've got a committee that doesn't talk to itself. The EU approach may be heavy-handed, but at least you know what the rules are. The UK approach is ambiguity dressed up as innovation.

Compliance as competitive advantage is a myth — non-compliance as competitive suicide is not

I'm tired of the consulting line that 'compliance is a competitive advantage.' It isn't. Nobody ever won a deal because their AI governance documentation was better.

But non-compliance is absolutely competitive suicide. When the first major AI liability case hits — and it will — the court won't care about your innovation credentials. It will care whether you did a risk assessment, whether you documented your decision-making, whether you had human oversight on high-risk systems. If the answer is 'no, we were waiting for the regulations to be clearer,' that's not a defence. That's an admission.

The organisations that will survive the first wave of AI enforcement aren't the ones with the best compliance programmes. They're the ones that can demonstrate they thought about risk before they were forced to. The bar is lower than you think — but you still have to clear it.

What you should actually do (it's not what the consultancies tell you)

The standard consultancy advice is: run an AI inventory, classify your systems, build a governance framework, appoint a responsible AI lead. It's all correct. It's also all obvious, and most of it won't get done because it's boring and nobody's accountable for it.

Here's what actually works. Pick your three highest-risk AI use cases — the ones where someone could get hurt, lose money, or get discriminated against. For those three, do a proper risk assessment. Document it. Put human oversight in place. Test the outputs for bias. That's it. Don't try to boil the ocean. Don't build a framework. Don't hire a Head of Responsible AI. Just fix the three things that could actually blow up.
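The "test the outputs for bias" step is less daunting than it sounds: at minimum, compare outcome rates across groups. Here's a minimal sketch of one common heuristic, the four-fifths rule on disparate impact. The group labels and numbers are hypothetical, and the 0.8 threshold is a screening convention, not a legal standard — treat a low ratio as a prompt for investigation, not a verdict.

```python
# Minimal disparate-impact check: compare positive-outcome rates across
# a protected attribute. Groups and data below are hypothetical.

def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Lowest group's selection rate divided by the highest.
    Below 0.8 is a common red flag (the 'four-fifths rule' heuristic)."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    # Group A approved 80/100 times; group B approved 50/100 times.
    sample = ([("A", True)] * 80 + [("A", False)] * 20
              + [("B", True)] * 50 + [("B", False)] * 50)
    print(f"disparate impact ratio: {disparate_impact_ratio(sample):.2f}")
```

A check like this takes an afternoon to run against your recruitment or credit-decision logs, and the documented result is exactly the kind of evidence a regulator or court will ask for.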

Once those are handled, you'll have enough institutional knowledge to expand the programme. But the organisations that start with the framework and never get to the risk are the ones that end up in the newspaper.

Frequently Asked Questions
Does the EU AI Act apply to UK companies?
Yes, if your AI systems affect EU citizens - through serving EU customers, processing EU data, or deploying AI that impacts EU-based users. The Act has extraterritorial reach, similar to GDPR. UK headquarters doesn't provide an exemption.
What are the penalties for non-compliance?
Fines can reach up to 35 million euros or 7% of global annual turnover for prohibited AI practices, and up to 15 million euros or 3% of global turnover for other violations. These are maximum penalties, but they signal the seriousness of the regime.
When do I need to be compliant?
Enforcement is phased. Prohibitions on unacceptable-risk AI applied from February 2025. Obligations for high-risk AI systems take effect through 2025 and 2026, with full enforcement from August 2026. The time to start preparing is now - compliance infrastructure takes months to build.
How does Galahad help with EU AI Act compliance?
The Galahad Diagnostic includes an AI governance assessment that maps your systems against the EU AI Act's risk tiers. We help build the compliance infrastructure - risk documentation, governance frameworks, human oversight mechanisms - that the Act requires. This is a practical, architecture-level engagement, not a policy review.
Related Articles

Why Your AI Governance Framework Is Already Outdated

Most governance frameworks were written for single-model deployments and human-in-the-loop workflows. Agentic AI breaks every assumption they're built on.

Read article →

Cognitive Security: The Attack Surface Nobody's Watching

AI security isn't just about the model. It's about the humans using it - and the organisational vulnerability chain that starts with one person's prompt and ends with corporate compromise.

Read article →

Your AI Vendor Just Made You the Fall Guy

Read the terms of service. Your AI vendor has already ensured that when the model gets it wrong, you're the one paying. Most organisations haven't noticed.

Read article →

Want to go deeper?

If this article raised questions about your own AI strategy, we're happy to talk it through. No pitch. No pressure.

Start a Conversation →

This article provides general information and opinion. It does not constitute legal, financial, or technical advice. Always consult qualified professionals for decisions specific to your organisation.

Galahad
AI that knows its place. · Founded by Ross Barnes
hello@galahadgroup.co.uk
© 2026 Galahad. All rights reserved. London · Global