Your AI Vendor Just Made You the Fall Guy

Last updated 2026-03-23

When an AI model generates a recommendation that loses a client money, produces content that defames someone, or makes a decision that discriminates against a protected group — who pays? If you think the answer is the vendor who built the model, go and read your terms of service. I'll wait. Every major AI vendor has carefully drafted their contracts to ensure that liability flows downstream — to you, the deploying organisation. The model may produce inaccurate, biased, or harmful outputs, and you are responsible for evaluating and validating those outputs. That's not a bug in the contract. It's the entire architecture. Your vendor just made you the fall guy, and most organisations haven't even noticed.

The liability architecture is deliberate

AI vendor contracts are masterpieces of risk transfer. They've been drafted by expensive lawyers whose entire job is to ensure that when something goes wrong — and it will — the vendor is insulated.

The standard clause goes something like this: the AI may produce outputs that are inaccurate, biased, incomplete, or inappropriate. The customer is solely responsible for reviewing, validating, and approving all outputs before use. The vendor provides the model 'as is' with no warranty of fitness for any particular purpose.

Translated: we built the thing, but if it breaks anything, that's your problem. You used it. You should have checked.

This would be unremarkable for a spreadsheet application. It's extraordinary for a system that organisations are using to make hiring decisions, generate client-facing advice, assess creditworthiness, and produce medical summaries. The gap between how organisations use AI and how vendors disclaim responsibility for AI is where the liability crisis lives.

The human oversight fiction

The standard defence is 'human in the loop.' Every AI governance framework includes it. Every vendor points to it. The theory is that a human reviews every AI output before it's acted upon, catching errors and exercising judgement.

In practice, human oversight is a fiction in most organisations. The whole point of deploying AI was to reduce human involvement in repetitive decisions. If a human reviews every output, you haven't automated anything — you've just added a step. So what actually happens is that human oversight starts rigorous and degrades over time. The first week, people check everything. The first month, they spot-check. By quarter two, they're rubber-stamping. By quarter three, the 'human in the loop' is a box that gets ticked without anyone actually looking.

This is predictable. It's human nature. And the courts will see right through it. When the first major AI liability case goes to trial, the defence of 'we had a human review process' will collapse the moment opposing counsel asks: how many outputs did the reviewer actually change? The answer will be close to zero, and that will prove that the 'oversight' was cosmetic.
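If you want to know whether your own loop is real, measure it before opposing counsel does. Here's a minimal Python sketch of the two numbers that question turns on; it assumes you log review events somewhere, and the field names are illustrative, not taken from any particular tool.

```python
from dataclasses import dataclass

@dataclass
class ReviewEvent:
    output_id: str   # identifier of the AI output under review
    reviewed: bool   # did a human actually open and inspect it?
    changed: bool    # did the reviewer amend or reject the output?

def oversight_metrics(events: list[ReviewEvent]) -> dict[str, float]:
    """Return the two numbers opposing counsel will ask for:
    what fraction of outputs were reviewed, and what fraction
    of reviewed outputs the reviewer actually changed."""
    total = len(events)
    reviewed = [e for e in events if e.reviewed]
    changed = [e for e in reviewed if e.changed]
    return {
        "review_rate": len(reviewed) / total if total else 0.0,
        "override_rate": len(changed) / len(reviewed) if reviewed else 0.0,
    }

# An override rate near zero, sustained over months, is evidence
# that the 'human in the loop' is rubber-stamping.
```

Tracked quarter by quarter, an override rate drifting towards zero is your early warning that oversight has decayed into box-ticking.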

The insurance industry sees it coming

If you want to know where AI liability is heading, follow the insurance market. Insurers are pricing AI risk before regulators have finished writing the rules — and what they're seeing is making them nervous.

Professional indemnity insurers are starting to ask AI-specific questions on renewal forms. Does your organisation use AI in client-facing advice? In automated decision-making? In content generation? The premiums are starting to reflect the answers.

Cyber insurers are adjusting policies to clarify whether AI-generated errors are covered. Many standard policies were written before AI deployment was common, and the exclusions are ambiguous. Expect clarity — in the form of higher premiums or explicit exclusions — within the next twelve months.

Directors' and officers' (D&O) insurance is the one to watch. If an AI system causes material harm and the board didn't have adequate governance in place, that's a D&O claim. And D&O insurers are already asking: what AI governance framework does the organisation have? If the answer is 'we're working on it,' the premium reflects the risk.

The insurance industry is telling you something: the liability clock is already ticking, and 'we'll deal with governance when the regulations are clearer' is now a quantifiable business risk.

Three things to do before the first case lands

The first major AI liability case in the UK will set precedent that affects every organisation using AI. You want to be on the right side of that precedent. Here's what that requires — and it's less than you think.

First, read your contracts. Actually read them. Every AI vendor agreement, every SaaS tool with AI features, every API integration. Understand exactly what liability you've accepted and whether your insurance covers it. Most organisations will discover gaps they didn't know existed.

Second, match your governance to your risk. Not every AI use case needs a review board. Your internal meeting summariser doesn't need the same oversight as your automated credit decisions. But the high-risk use cases — anything that affects someone's money, employment, health, or legal rights — need documented risk assessments, meaningful human oversight, and audit trails that will survive scrutiny.
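To make 'proportionate' concrete, here's a small Python sketch that gates a use case to a set of controls based on whether it touches money, employment, health, or legal rights. The domain and control names are illustrative assumptions, not a regulatory taxonomy.

```python
HIGH_STAKES_DOMAINS = {"money", "employment", "health", "legal_rights"}

def required_controls(domains_affected: set[str]) -> list[str]:
    """Map a use case to a proportionate set of governance controls.
    Domain and control names are illustrative, not a standard."""
    if domains_affected & HIGH_STAKES_DOMAINS:
        return [
            "documented_risk_assessment",
            "named_accountable_owner",
            "human_review_with_override_logging",
            "audit_trail_retention",
        ]
    return ["usage_logging", "periodic_spot_checks"]

# The meeting summariser vs. automated credit decisions:
assert required_controls(set()) == ["usage_logging", "periodic_spot_checks"]
assert "audit_trail_retention" in required_controls({"money"})
```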

Third, document your decisions now. When the case lands, what will matter is whether you can demonstrate you thought about AI risk and took reasonable steps. Not perfect steps. Reasonable steps. A documented risk assessment, a governance process that actually operates, and evidence that someone senior was accountable.
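In machine-readable form, a documented decision might look like the sketch below: one append-only record per governance decision. The schema is a suggestion to adapt with your own counsel, not a legal standard.

```python
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class GovernanceDecision:
    """One auditable record per AI risk decision. Fields are
    an illustrative suggestion, not a legal or regulatory schema."""
    use_case: str              # e.g. "automated credit pre-screening"
    risk_tier: str             # output of your risk assessment
    controls_applied: list[str]
    accountable_owner: str     # a named senior individual
    decided_on: date
    rationale: str             # why these steps were judged reasonable

record = GovernanceDecision(
    use_case="automated credit pre-screening",
    risk_tier="high",
    controls_applied=["human_review_with_override_logging"],
    accountable_owner="Chief Risk Officer",
    decided_on=date(2026, 3, 23),
    rationale="Affects applicants' access to credit; board oversight required.",
)

# Serialise for an append-only audit store.
print(json.dumps(asdict(record), default=str, indent=2))
```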

The organisations that do these three things will survive the first wave of AI litigation. The ones that don't will wish they had.

Frequently Asked Questions
Who is legally liable when an AI system causes harm?
In most jurisdictions, the deploying organisation bears primary liability for AI outputs, regardless of who built the model. Vendor contracts typically disclaim liability for model outputs. The organisation is expected to perform due diligence, implement guardrails, and monitor outcomes. Specific liability allocation depends on the use case, jurisdiction, and applicable regulations.
Does the EU AI Act apply to UK organisations?
If you serve EU customers or your AI systems affect EU residents, the AI Act likely applies to your operations. Even if it doesn't apply directly, UK regulators are developing aligned frameworks, and the standards set by the AI Act are becoming de facto global benchmarks. Building to these standards now is pragmatic risk management.
How do we determine the risk level of our AI applications?
Assess each AI use case against two dimensions: severity of potential harm (financial, reputational, safety, discrimination) and likelihood of that harm occurring. High-severity, high-likelihood applications need the most rigorous governance. Low-severity, low-likelihood applications can operate with lighter controls. Document the assessment and review it regularly.
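A minimal way to operationalise that two-dimensional assessment is a matrix lookup, sketched below in Python; the three-point scales and tier thresholds are assumptions to calibrate against your own risk appetite, not a standard.

```python
def risk_tier(severity: int, likelihood: int) -> str:
    """Classify an AI use case on 1-3 scales for severity of
    potential harm and likelihood of that harm occurring.
    Scales and thresholds are illustrative, not prescriptive."""
    if not (1 <= severity <= 3 and 1 <= likelihood <= 3):
        raise ValueError("severity and likelihood must be 1, 2, or 3")
    score = severity * likelihood
    if score >= 6:
        return "high"    # rigorous governance: assessment, oversight, audit
    if score >= 3:
        return "medium"  # documented controls, periodic review
    return "low"         # lighter controls, routine monitoring

# Automated credit decisions: severe harm, plausible occurrence.
assert risk_tier(3, 2) == "high"
# Internal meeting summariser: low harm, low likelihood.
assert risk_tier(1, 1) == "low"
```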
Should we wait for clearer regulation before building an AI governance framework?
No. Waiting creates two risks: accumulated liability from ungoverned AI systems, and unfavourable treatment from regulators who reward proactive compliance. The direction of regulation is clear even if the details are still emerging. Building a proportionate governance framework now is significantly cheaper than retrofitting one after an incident.
Related Articles

Why Your AI Governance Framework Is Already Outdated

Most governance frameworks were written for single-model deployments and human-in-the-loop workflows. Agentic AI breaks every assumption they're built on.


Cognitive Security: The Attack Surface Nobody's Watching

AI security isn't just about the model. It's about the humans using it, and the organisational vulnerability chain that starts with one person's prompt and ends with corporate compromise.


Most UK Businesses Are Hiding Behind Brexit to Ignore AI Regulation

Brexit didn't exempt you from AI regulation. It just gave you an excuse to pretend it doesn't apply. The EU AI Act has extraterritorial reach, and the UK's own patchwork is worse, not better.



This article provides general information and opinion. It does not constitute legal, financial, or technical advice. Always consult qualified professionals for decisions specific to your organisation.
