Shadow AI Is a CCO Problem, Not an IT Problem

Is your shadow AI prohibition enforceable, or just documented?

Shadow AI governance fails at the org chart, not the policy.

The Situation

Shadow AI is already inside your organization. Employees aren't circumventing policy out of defiance; they're trying to do their jobs better. In the process, they've created a structural governance gap. Your firewall catches the endpoints IT has catalogued, but it doesn't catch a browser extension routing company data to a third-party AI service, a niche LLM subscription purchased on a corporate card, or an AI feature quietly added to a SaaS platform you already use.

The Exposure

Shadow AI is hard to control because it's fundamentally a human and process problem, not a technical one. Yet most governance programs try to manage it through the CIO. In regulated industries, the exposure is evidentiary as much as operational: when regulators ask what you're doing about unapproved AI tools in your organization, a policy prohibiting them doesn't substitute for control evidence. IBM's 2025 Cost of a Data Breach Report found that 1 in 5 organizations reported a breach attributable to shadow AI. For financial services and healthcare firms, "we have a policy that prohibits it" is an answer that invites the follow-up question; a governance program needs to demonstrate broad-based prevention.

The Judgment Call

The de facto practice in most organizations today is to hand it to the CIO. That's the wrong owner. Technical controls catch known endpoints; they don't catch AI features that vendor updates embed in sanctioned SaaS tools, and they don't catch employees using AI on personal devices. Ownership this important belongs with the Chief Risk Officer or Chief Compliance Officer. Yes, IT owns the network and endpoint controls, but a comprehensive program needs to be broader. HR should issue an anonymous employee survey to assess the current state and develop an annual employee education and attestation program. Finance should review corporate card spending to validate employee purchases, and work with procurement to identify every software contract that includes AI features. A blanket shadow AI prohibition without that comprehensive structure doesn't protect you; it documents that you knew the risk existed but didn't try to manage it.

  • Risk: Coordinating across IT, HR, and Finance diffuses accountability if ownership isn't explicit from the start.

  • Benefit: A multi-layer detection framework produces audit evidence that a policy alone can't, and minimizes the risk of IP leakage and breaches.

This Week’s Action

  • What to do: Conduct a four-layer shadow AI audit covering IT detection scope, corporate card and expense transactions (search for OpenAI, ChatGPT, Anthropic, Claude, Perplexity, Grok, Canva, and Notion AI), an anonymous employee survey on AI tool usage, and written governance ownership of each detection layer.

  • Who to involve: CIO/CISO, CFO, and HR. Review findings with counsel before deploying updated employee attestation language.

  • What outcome to achieve: Discovery and reduction of shadow AI in your organization, clear ownership of each detection layer, and a prioritized list of gaps to remediate before your next board or risk committee review.

  • Time required: 90 minutes (45 minutes to scope the audit and assign ownership; 45 minutes to review findings two weeks later).
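The expense-review layer of the audit can be sketched as a simple keyword scan over a card-transaction extract. This is an illustrative sketch, not a production control: the column names (`cardholder`, `merchant`, `amount`) and the sample data are hypothetical, and real card feeds will need fuzzier matching and vendor-list maintenance.

```python
import csv
import io

# Vendor keywords drawn from the audit scope above; extend as new tools appear.
AI_VENDOR_KEYWORDS = [
    "openai", "chatgpt", "anthropic", "claude",
    "perplexity", "grok", "canva", "notion",
]

def flag_ai_spend(rows):
    """Return transactions whose merchant description matches an AI vendor keyword.

    `rows` is an iterable of dicts with (assumed) keys
    'cardholder', 'merchant', and 'amount'.
    """
    flagged = []
    for row in rows:
        merchant = row.get("merchant", "").lower()
        if any(kw in merchant for kw in AI_VENDOR_KEYWORDS):
            flagged.append(row)
    return flagged

# Example: a small in-memory expense extract (column names are illustrative).
sample_csv = """cardholder,merchant,amount
J. Smith,OPENAI *CHATGPT SUBSCR,20.00
A. Jones,ACME OFFICE SUPPLY,54.10
R. Lee,ANTHROPIC PBC CLAUDE,25.00
"""

rows = list(csv.DictReader(io.StringIO(sample_csv)))
hits = flag_ai_spend(rows)
for hit in hits:
    print(f"{hit['cardholder']}: {hit['merchant']} (${hit['amount']})")
```

In practice, Finance would run this against the full 90-day card and expense export and hand the flagged list to the governance owner for follow-up.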

Artifact

Each layer of defense is owned by a different function, and a gap in any one of them leaves your governance program incomplete. Use this checklist to assess your current state and assign remediation ownership.

Layer 1 — IT/CIO

  • Known public LLM and MCP endpoints blocked at the firewall

  • Browser extensions audited on managed devices; unsanctioned extensions restricted

  • BYOD policy addresses AI tool usage explicitly

Layer 2 — HR

  • Anonymous staff survey deployed to understand what AI tools employees are using and why

  • Annual employee certification updated to include AI acceptable use attestation

  • AI policy acknowledgment included in new hire onboarding

Layer 3 — Finance

  • Corporate card and expense data reviewed for AI-related subscription spend (last 90 days minimum)

  • SaaS procurement agreements reviewed for AI features added without separate approval

Layer 4 — Governance (CCO/CRO)

  • Formal ownership assigned in writing across IT, HR, and Finance detection layers

  • Escalation path documented for discovered violations

  • Multi-layer detection results aggregated for board and risk committee reporting

  • Annual attestation results reported as a governance metric

  • Sanctioned AI alternatives documented in the organization's AI policy

When the stakes exceed your internal capacity:

  • AI Exposure Diagnostic: A 2-hour strategic evaluation for risk, compliance, and legal leaders to identify your highest-priority governance gaps and deliver a 90-day remediation roadmap.

  • 12-Week Governance Sprint: Translate regulatory requirements into audit-ready policies, control frameworks, and accountability structures.

  • Ongoing Advisory Retainer: Embedded judgment for policy updates, vendor assessments, and board prep as regulations and technology evolve.

Reply with "Diagnostic" or "Sprint" to schedule a conversation for next month.

Chris Cook writes Judgment Call weekly for compliance and risk officers navigating AI governance.

Former IBM Vice President and Deputy Chief Auditor. Published in the AI Journal, speaker at Yale.

Chris Cook

Managing Partner & Founder

Blackbox Zero

Forwarded by a colleague? Subscribe to Judgment Call
