Your AI Vendor Caps Their Liability at One Month's Fees. You're on the Hook for Millions.

Does your AI vendor contract cap their exposure at one month's fees while you're on the hook for millions?

Most enterprise AI vendors provide the model. You provide the liability.

The Situation

When your AI vendor's model produces a biased output, a hallucinated result, or a discriminatory decision, your contract almost certainly limits their exposure to a modest multiple of the fees you've paid them. Recent analysis of AI vendor agreements found that 88% impose liability caps in their favor, and only 17% provide any warranty that their product complies with applicable laws. The regulatory fines and class action exposure that follow an AI failure are yours to absorb, not theirs. That's not an oversight in the contract. It's the contract working exactly as the vendor intended.

The Exposure

Most AI vendor contracts limit indemnification to IP infringement claims, which means discrimination, bias, and regulatory compliance failures fall entirely outside vendor coverage and entirely on you. That 17% with a compliance warranty is concentrated in enterprise contracts negotiated by sophisticated buyers; if you accepted standard terms, you likely aren't in it. And under the theory recently advanced in Mobley v. Workday, a federal court allowed claims to proceed on the basis that an AI screening vendor may have acted as the employer's "agent," meaning both the vendor and the client can face liability simultaneously, even when the contract says otherwise.

The Judgment Call

If you accepted standard terms, you accepted standard risk allocation, which means the vendor's exposure is capped at fees. Your governance framework is the most credible tool you have to change that at renewal. An organization with documented bias testing, audit logs, and human oversight protocols represents a lower risk of an AI failure event - for you, for your insurers, and for the vendor who might otherwise be pulled into litigation alongside you. Courts in several jurisdictions are also starting to refuse enforcement of indemnification clauses that shift anti-discrimination liability to the client, so the leverage is moving in your direction. Your realistic target isn't eliminating caps; it's securing carve-outs for the categories that actually matter: compliance failures, discrimination claims, and data breaches.

  • Risk: Reopening liability terms at renewal can stall contract execution, and vendors who feel their standard terms are being challenged may escalate negotiations to legal teams, adding cost and delay to what should be a routine renewal.

  • Benefit: Even partial wins - a carve-out for discrimination claims, an audit right, or a compliance warranty tied to your jurisdiction - directly reduce legal exposure and strengthen your position with cyber insurers who are increasingly asking how AI vendor risk is contractually managed.

This Week’s Action

  • What to do: Pull the liability and indemnification section from your two highest-risk AI vendor contracts and identify three things: whether their exposure is capped at monthly or annual fees, whether they provide any warranty for regulatory compliance, and whether your firm is required to indemnify them against third-party claims arising from AI outputs.

  • Who to involve: Your General Counsel or outside contract counsel, and your Head of Procurement.

  • What outcome to achieve: A one-page gap summary for each contract identifying the terms most misaligned with your actual regulatory exposure, timed to each vendor's next renewal window.

  • Time required: 45 minutes to pull and review; 30 minutes with counsel to prioritize the gaps.

Artifact

Use this before your next vendor renewal. The goal isn't eliminating caps: vendors won't accept that, and courts generally uphold them in B2B contracts. The goal is carve-outs and super caps for the categories where your regulatory exposure is highest.

  • Compliance Failure Carve-Out: Is liability for the vendor's failure to comply with applicable AI, privacy, or anti-discrimination law carved out of the standard fee-based cap, or subject to a separate, higher "super cap"?

  • Model-Level Bias Indemnification: Does the vendor conduct pre-deployment bias testing, and does their indemnification cover discrimination claims attributable to the base model as distinct from bias introduced by your data or deployment configuration?

  • Regulatory Compliance Warranty: Does the vendor explicitly warrant that their model complies with applicable law in your jurisdiction, or do they disclaim compliance entirely? A compliance warranty is a negotiation target, not a standard term.

  • Audit Rights: Do you have a contractual right to access bias testing results, model documentation, and decision logs on request before a regulatory inquiry, not only during one?
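For teams that review more than a couple of vendor contracts, the four checklist items above can be tracked in a simple structured template that produces the one-page gap summary the weekly action calls for. This is an illustrative sketch only; the field names and example answers are hypothetical, not drawn from any real contract.

```python
# Hypothetical sketch: encode the four renewal checklist items as a
# structured template and emit a gap summary per vendor contract.

CHECKLIST = {
    "compliance_failure_carve_out":
        "Liability for compliance failures carved out of the fee cap or under a super cap",
    "model_level_bias_indemnification":
        "Indemnification covers discrimination claims attributable to the base model",
    "regulatory_compliance_warranty":
        "Vendor warrants compliance with applicable law in your jurisdiction",
    "audit_rights":
        "Contractual right to bias testing results and decision logs before a regulatory inquiry",
}

def gap_summary(vendor: str, answers: dict[str, bool]) -> list[str]:
    """Return the checklist items this contract fails, one line per gap."""
    return [
        f"{vendor}: GAP - {CHECKLIST[item]}"
        for item, satisfied in answers.items()
        if not satisfied
    ]

# Example review of a vendor on standard terms (answers are hypothetical):
gaps = gap_summary("VendorA", {
    "compliance_failure_carve_out": False,
    "model_level_bias_indemnification": False,
    "regulatory_compliance_warranty": False,
    "audit_rights": True,
})
for line in gaps:
    print(line)
```

The output is the raw material for the gap summary counsel prioritizes: each line names a term misaligned with your regulatory exposure, ready to be timed against the vendor's renewal window.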

When the stakes exceed your internal capacity:

  • AI Exposure Diagnostic: A 2-hour strategic evaluation for risk, compliance, and legal leaders to identify your highest-priority governance gaps and deliver a 90-day remediation roadmap.

  • 12-Week Governance Sprint: Translate regulatory requirements into audit-ready policies, control frameworks, and accountability structures.

  • Ongoing Advisory Retainer: Embedded judgment for policy updates, vendor assessments, and board prep as regulations and technology evolve.

Reply with "Diagnostic" or "Sprint" to schedule a conversation for next month.

Chris Cook writes Judgment Call weekly for compliance and risk officers navigating AI governance.

Former IBM Vice President and Deputy Chief Auditor. Published in the AI Journal, speaker at Yale.

Chris Cook

Managing Partner & Founder

Blackbox Zero

Forwarded by a colleague? Subscribe to Judgment Call
