Stop Leading With Maximum Fines. Use Expected Value to Win the AI Governance Budget Conversation.

Are you losing the AI governance budget conversation before it even starts?

A number that sounds like an outlier gets treated like one.

The Situation

Most compliance teams make the same mistake when presenting AI risk to leadership: they lead with the maximum statutory penalty. California's CCPA imposes up to $7,988 per violation where your firm knew or should have known the practice was non-compliant, with each affected consumer counted separately. Colorado's AI Act, effective June 30, 2026, adds up to $20,000 per affected consumer for consequential AI decisions in employment, lending, healthcare, and insurance. A firm whose model is found non-compliant doesn't face a single fine; it faces those numbers multiplied by every consumer the model touched. Your CFO files that number alongside asteroid strike scenarios. Expected value doesn't have that problem.

The Exposure

The enforcement record of the last eighteen months makes the probability argument concrete. A Massachusetts AG settlement with a student lender over AI-driven loan decisions cost $2.5 million. Clearview AI's penalty for illegal facial data collection reached €30.5 million. Mobley v. Workday, certified as a nationwide collective action in May 2025 with an opt-in window closing this month, could expose every company using Workday's AI hiring tools to per-consumer damages across millions of applicants. These aren't outliers anymore; they're the emerging baseline for organizations that deployed AI without documented controls. The likelihood that your firm is next just got meaningfully higher, and that moves your expected value in one direction.

The Judgment Call

The expected value framework is straightforward. Multiply the probability of an AI failure event by its total financial impact (litigation exposure, regulatory fines, remediation costs, and reputational damage). A 10% probability of a $2 million event represents $200,000 in risk exposure that isn't anywhere in your current planning assumptions. Compare that figure against the cost of your AI governance infrastructure to get the ROI. AI governance should cost less than the risk it mitigates, and on that basis you've moved the conversation from regulatory obligation and budget constraints to cost-effective risk management: a language your CFO already speaks.

  • Risk: Building this model requires probability estimates your team may not have. Base them on your firm's specific AI applications and how closely they resemble the business profiles that have already attracted regulatory attention. That alignment is your most defensible starting point.

  • Benefit: A strong cost/benefit analysis gives you a credible argument in budget discussions, one that shifts the conversation from "how much more budget are you asking for this year because of AI?" to "how much risk are we carrying, and what’s the right level of investment to reduce it?"
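The arithmetic behind the framework fits in a few lines. This is a minimal sketch using the illustrative figures from the text above (10% probability, $2 million impact); the governance cost is a hypothetical placeholder, not a benchmark.

```python
# Expected value of an AI failure event: probability x total impact.
# Figures are the illustrative ones from the text, not estimates for any real firm.

def expected_value(probability: float, total_impact: float) -> float:
    """Expected loss from a single failure scenario."""
    return probability * total_impact

# The worked example from the text: a 10% chance of a $2 million event.
ev = expected_value(0.10, 2_000_000)
print(f"Expected exposure: ${ev:,.0f}")  # $200,000

# ROI framing: governance should cost less than the risk it mitigates.
governance_cost = 150_000  # hypothetical annual governance spend
print(f"Risk mitigated per dollar of governance spend: {ev / governance_cost:.2f}")
```

The point of putting it in these terms is that the output slots directly into the cost/benefit analysis your CFO already runs on every other budget line.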

This Week’s Action

  • What to do: Select your single highest-risk AI use case and draft a one-paragraph expected value statement: identify the failure scenario, estimate its probability range using industry enforcement activity in your sector, and multiply it against a conservative total impact figure that includes litigation, regulatory, and remediation costs.

  • Who to involve: Your legal team and a financial analyst. Legal input on litigation exposure and regulatory probability is what separates a defensible estimate from a number someone will immediately challenge. Your financial analyst provides accurate cost-per-person-year figures and, critically, gives your CFO a familiar internal voice who has reviewed and signed off on the assumptions.

  • What outcome to achieve: A one-page expected value summary for your highest-risk AI model, suitable for presentation to your CFO as a standing line item in your risk register and direct support for AI governance funding.

  • Time required: 90 minutes to draft your initial model; allow an additional week to gather legal and financial input before finalizing the numbers.

Artifact

Use this structure to build your first expected value model. Some items may require input you don't yet have; identifying those gaps is part of the exercise.

  1. The Failure Scenario: Describe the specific AI failure event (biased output, data leak, hallucinated advice, unsupervised decision) in one sentence. Vague scenarios produce less credible numbers.

  2. Peer Enforcement Case Reference: Identify at least one enforcement action or class action from your sector in the last 24 months whose underlying AI application most closely resembles yours. Similarity of use-case is your most defensible basis for a probability estimate.

  3. Probability Range: Assign three estimates (low, most likely, and high) based on how closely your AI application and deployment practices resemble the business profiles that have already attracted regulatory or legal action.

  4. Total Impact Estimate: Sum the litigation exposure, regulatory fine estimate, remediation spend, and a conservative reputational impact figure (often modeled as a percentage of revenue) to derive a total impact reference cost.

  5. Expected Value Calculation: Multiply your most likely probability by your total impact estimate. This is the number that belongs in your risk register and in front of your CFO. Calculate the high and low cases as well so you can bound the range during the discussion.

  6. The Headcount Conversion: Divide your expected value by the AI governance team’s fully loaded annual cost per person. If your current team is smaller than that number, you’re carrying more AI risk than your firm should, and you now have the math to demonstrate it.
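The six steps above can be sketched as a single small model. Every figure below is a placeholder chosen to mirror the worked example in the text (they sum to a $2 million impact at a 10% most-likely probability); substitute your own estimates, and treat the $250,000 cost per person as a hypothetical fully loaded rate.

```python
# A minimal sketch of the six-step expected value model.
# All figures are illustrative placeholders, not benchmarks.

from dataclasses import dataclass

@dataclass
class EVModel:
    scenario: str        # Step 1: the failure scenario, in one sentence
    peer_case: str       # Step 2: closest peer enforcement reference
    p_low: float         # Step 3: probability range
    p_likely: float
    p_high: float
    litigation: float    # Step 4: impact components
    fines: float
    remediation: float
    reputational: float

    @property
    def total_impact(self) -> float:
        # Step 4: sum of all impact components.
        return self.litigation + self.fines + self.remediation + self.reputational

    def ev(self, probability: float) -> float:
        # Step 5: probability times total impact.
        return probability * self.total_impact

    def justified_headcount(self, cost_per_person: float) -> float:
        # Step 6: expected value divided by fully loaded annual cost per person.
        return self.ev(self.p_likely) / cost_per_person

model = EVModel(
    scenario="Biased output in automated loan pre-screening",
    peer_case="Massachusetts AG student-lender settlement ($2.5M)",
    p_low=0.05, p_likely=0.10, p_high=0.20,
    litigation=1_200_000, fines=500_000,
    remediation=200_000, reputational=100_000,
)

print(f"Total impact: ${model.total_impact:,.0f}")
print(f"EV (most likely): ${model.ev(model.p_likely):,.0f}")
print(f"EV range: ${model.ev(model.p_low):,.0f} - ${model.ev(model.p_high):,.0f}")
print(f"Headcount the risk supports: {model.justified_headcount(250_000):.1f}")
```

The high/low cases come out of the same structure for free, which is exactly what you want when the CFO pushes back on the most-likely estimate.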

When the stakes exceed your internal capacity:

  • AI Exposure Diagnostic: A 2-hour strategic evaluation for risk, compliance, and legal leaders to identify your highest-priority governance gaps and deliver a 90-day remediation roadmap.

  • 12-Week Governance Sprint: Translate regulatory requirements into audit-ready policies, control frameworks, and accountability structures.

  • Ongoing Advisory Retainer: Embedded judgment for policy updates, vendor assessments, and board prep as regulations and technology evolve.

Reply with "Diagnostic" or "Sprint" to schedule a conversation for next month.

Chris Cook writes Judgment Call weekly for compliance and risk officers navigating AI governance.

Former IBM Vice President and Deputy Chief Auditor. Published in the AI Journal, speaker at Yale.

Chris Cook

Managing Partner & Founder

Blackbox Zero

Forwarded by a colleague? Subscribe to Judgment Call
