AI vendor contracts are masterpieces of risk transfer, drafted by expensive lawyers whose entire job is to ensure that when something goes wrong, and it will, the vendor is insulated.
The standard clause goes something like this: the AI may produce outputs that are inaccurate, biased, incomplete, or inappropriate. The customer is solely responsible for reviewing, validating, and approving all outputs before use. The vendor provides the model 'as is' with no warranty of fitness for any particular purpose.
Translated: we built the thing, but if it breaks anything, that's your problem. You used it. You should have checked.
This would be unremarkable for a spreadsheet application. It's extraordinary for a system that organisations are using to make hiring decisions, generate client-facing advice, assess creditworthiness, and produce medical summaries. The gap between how organisations use AI and how vendors disclaim responsibility for AI is where the liability crisis lives.