AI Policies Without Enforcement Create Bigger Liability Than Having No Policy

Is your AI policy creating evidence for regulators to use against you?

If your operational reality doesn't match your written policy, you haven't built a governance framework. What you've built is a documented roadmap for a regulatory audit or a class-action lawsuit.

The Situation

You have an AI policy. Your team isn't following it. The document sits on the intranet while your marketing team drafts client communications with an unvetted LLM and your engineers test open-source models against live data sets.

The Exposure

In a regulatory inquiry, a policy you don't enforce is worse than having no policy at all. It proves you recognized the risk and defined a control, yet willfully failed to implement it - shifting the conversation from "unforeseen risk" to "knowing negligence." For those of us in regulated industries, this gap is where the maximum fines and the personal liability for named officers reside.

The Judgment Call

If you have a policy clause that isn't being actively audited, you're building legal liability rather than governance. Pause new policy expansion immediately; it doesn't matter how sophisticated your 30-page AI Ethics document is if your team can't prove it's following the first five pages. You're better off with three rules you strictly enforce than thirty rules that serve as a blueprint for regulators to demonstrate your failure.

  • Risk: Expect pushback from Legal and from teams who believe that “more documentation = more safety.”

  • Benefit: You close the negligence gap and ensure you're actually audit-ready, before a regulator asks for your logs.

This Week’s Action

  • What to do: Run a single-clause stress test. Pick one unambiguous rule from your policy - such as the prohibition on pasting PII into public LLMs - and ask to see the specific software logs proving it's being blocked or monitored.

  • Who to involve: Your CISO or a lead Data Architect.

  • What outcome to achieve: Determine if the evidence exists today; if it doesn't, that policy clause is currently a liability.

  • Time required: <45 minutes.
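If no DLP tooling exists yet, the evidence request above can be approximated with a first-pass script that scans an egress or proxy log for PII-looking text sent to public LLM endpoints. This is a minimal sketch, not a compliance control: the log structure, the host list, and the regex patterns below are all hypothetical placeholders for illustration.

```python
import re

# Hypothetical PII patterns for a first pass; real DLP rule sets are far broader.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

# Hypothetical set of public LLM API hosts your firewall or proxy would see.
LLM_HOSTS = {"api.openai.com", "api.anthropic.com"}

def find_pii_violations(log_entries):
    """Return entries that sent PII-looking text to a public LLM endpoint.

    Assumes `log_entries` is a pre-parsed proxy log: a list of dicts
    with 'host' (destination) and 'body' (outbound request text) keys.
    """
    violations = []
    for entry in log_entries:
        if entry["host"] not in LLM_HOSTS:
            continue  # traffic to non-LLM destinations is out of scope here
        hits = [name for name, pattern in PII_PATTERNS.items()
                if pattern.search(entry["body"])]
        if hits:
            violations.append({**entry, "pii_types": hits})
    return violations

# Example: two requests, one of which leaks an SSN to an LLM API.
sample = [
    {"host": "api.openai.com", "body": "Summarize: client SSN 123-45-6789"},
    {"host": "internal.corp",  "body": "SSN 123-45-6789"},  # internal system
]
print(find_pii_violations(sample))
```

If this script returns nothing because no such log exists, that absence is itself the finding: the clause has no evidence behind it.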

Artifact

Can your team produce these four items by the end of the week? If they can’t, your policy is just a list of aspirations.

  • The Shadow AI Scan: A list of all unauthorized LLM API calls detected at the firewall over the last 30 days.

  • The Permission Log: Evidence of explicit consent for any customer data currently being used in internal models.

  • The Human-in-the-Loop Audit: A timestamped log showing a human reviewer approved the output of high-risk AI-generated customer communications.

  • The Vendor "Kill Switch": Documentation of the technical process to immediately purge your data from a third-party AI provider’s environment.
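The Shadow AI Scan in the list above can be sketched as a count of calls to unapproved LLM API domains over a 30-day window. Everything here is an assumption for illustration: the domain lists are hypothetical, and real firewall or DNS logs would need parsing into (timestamp, domain) pairs first.

```python
from collections import Counter
from datetime import datetime, timedelta, timezone

# Hypothetical: LLM API domains you watch for, and the approved subset.
KNOWN_LLM_DOMAINS = {"api.openai.com", "api.anthropic.com"}
APPROVED_DOMAINS = {"api.anthropic.com"}

def shadow_ai_scan(records, now=None, window_days=30):
    """Count calls to unapproved LLM APIs within the last `window_days`.

    Assumes `records` is a list of (timestamp, domain) tuples already
    parsed out of firewall or DNS logs.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=window_days)
    unapproved = KNOWN_LLM_DOMAINS - APPROVED_DOMAINS
    counts = Counter()
    for ts, domain in records:
        if ts >= cutoff and domain in unapproved:
            counts[domain] += 1
    return counts

# Example log: one shadow call in window, one approved call, one stale call.
now = datetime(2025, 6, 30, tzinfo=timezone.utc)
records = [
    (datetime(2025, 6, 29, tzinfo=timezone.utc), "api.openai.com"),
    (datetime(2025, 6, 28, tzinfo=timezone.utc), "api.anthropic.com"),
    (datetime(2025, 4, 1, tzinfo=timezone.utc), "api.openai.com"),
]
print(shadow_ai_scan(records, now=now))  # Counter({'api.openai.com': 1})
```

A nonzero count is your shadow AI inventory; an empty one is only meaningful if you can also show the log source is complete.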

When the stakes exceed your internal capacity:

  • AI Exposure Diagnostic: A 2-hour strategic evaluation for risk, compliance, and legal leaders to identify your highest-priority governance gaps and deliver a 90-day remediation roadmap.

  • 12-Week Governance Sprint: Translate regulatory requirements into audit-ready policies, control frameworks, and accountability structures.

  • Ongoing Advisory Retainer: Embedded judgment for policy updates, vendor assessments, and board prep as regulations and technology evolve.

Reply with “Diagnostic” or “Sprint” to schedule a conversation for next month.

Chris Cook writes Judgment Call weekly for compliance and risk officers navigating AI governance.

Former IBM Vice President and Deputy Chief Auditor. Published in the AI Journal, speaker at Yale.

Chris Cook

Managing Partner & Founder

Blackbox Zero

Forwarded by a colleague? Subscribe to Judgment Call

Previous

The 48-Hour Evidence Rule: Can You Prove Your AI Controls Work?