Who Can Pull the Plug on a Harmful AI System Without a Committee Vote?

Should you bypass the approval committee when halting an AI process that’s making errors?

Approval is a committee act; halting mustn’t be. Most governance frameworks treat them the same.

The Situation

Most AI governance committees have a documented approval process including named chairs, voting quorums, and escalation paths. When an AI system is discovered to be giving harmful or wrong output, stopping it usually requires convening that same committee. The approval process is defined and collective; the stop process is undefined or orphaned.

The Exposure

Grant Thornton's 2026 AI Impact Survey found that only 20% of organizations have a tested AI incident response plan, but nearly three in four are giving agentic AI access to their data and processes. The same survey found 54% of COOs are worried about regulatory and compliance risk from agentic AI, whereas only 20% of CIOs and CTOs share the concern. COOs should be worried; they own the operational consequences when an AI system causes harm. CIOs and CTOs are accountable for making the technology work; COOs are accountable for what happens after. When no one has documented authority to stop a system, the person who deployed it keeps it running, and the person living with the consequences has no mechanism to intervene. That's a material weakness, and it's the first thing an auditor asks about when something goes wrong.

The Judgment Call

The impulse is to separate stop authority from the approval committee entirely, but that creates a different problem. In a well-governed setup, the CRO or CCO sits on the approval committee because risk and compliance should be at the table when systems are evaluated and green-lit. Removing them from approvals to preserve their independence as stoppers weakens both functions. The better answer is to separate the decision rule, not the role. Make approval a collective act requiring a quorum. Make halting an individual act, exercisable by a named risk officer such as the CRO, CCO, or even the Chief Auditor, based on pre-defined triggers, without committee consensus and without any other permission. Write a one-page stop authority protocol, name the individual, define the triggers, and give the AI approval committee notification rights only.

  • Risk: The person who holds stop authority will be reluctant to use it because halting a production system is a visible, high-stakes act that they’re responsible for. The CIO and COO might also object since they won’t have consultation rights.

  • Benefit: A named individual with a defined trigger and a clear execution window stops a harmful system before the customer impact compounds.

This Week’s Action

  • What to do: Pull your current AI governance charter and identify whether a named individual has authority to halt a deployed AI system without committee consensus. If no one does, you have a gap. Draft a one-page stop authority protocol.

  • Who to involve: Your CRO, CCO, or CAE (whoever will hold the authority), working with General Counsel to validate the trigger language. Brief the leadership team on the protocol once drafted.

  • What outcome to achieve: A one-page protocol that separates the halt decision rule from the approval decision rule, with a named executive, documented triggers, defined execution window, and a post-halt review process.

  • Time required: 15 minutes to review the current charter; 45 minutes with the CRO or CCO and General Counsel to draft the protocol; 15 minutes to brief the leadership team.

Artifact

Use this as the starting framework for your one-page protocol. Fill in the blanks working with your CRO or CCO and General Counsel.

Named Stop Authority

The named stop authority for AI systems is _____________ (title and name), with _____________ (title and name) serving as delegate when the primary is unavailable. The stop authority holder does not report to the function that deploys AI systems.

Trigger Definitions

A halt is triggered when any of the following conditions is confirmed:

  1. Output bias detected affecting a protected class

  2. Material regulatory exposure identified by compliance, legal, or audit

  3. Unexplained output drift exceeding tolerances defined at approval

  4. Customer harm confirmed through complaints, escalations, or monitoring alerts

  5. _____________ (organization-specific trigger)

Execution Rules

The halt is an individual decision and does not require committee consensus. The maximum window from trigger identification to system halt is _____ hours. The AI approval committee and CIO/COO are notified after the halt is executed, not consulted before. 
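For organizations that codify governance rules as policy-as-code, the execution rule above reduces to a few lines: one named officer, one confirmed trigger, no quorum. This is a hypothetical sketch, not a reference to any real governance tooling; the trigger names, the `may_halt` function, and the `stop_authority` field are illustrative assumptions.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Trigger(Enum):
    OUTPUT_BIAS = auto()          # bias affecting a protected class
    REGULATORY_EXPOSURE = auto()  # flagged by compliance, legal, or audit
    OUTPUT_DRIFT = auto()         # drift beyond approval-time tolerances
    CUSTOMER_HARM = auto()        # confirmed via complaints or monitoring

@dataclass
class HaltDecision:
    trigger: Trigger
    confirmed: bool
    decided_by: str  # must match the named stop authority

def may_halt(decision: HaltDecision, stop_authority: str) -> bool:
    """Halt is an individual act: a confirmed trigger plus the named
    officer. Note that no quorum check appears anywhere in this rule."""
    return decision.confirmed and decision.decided_by == stop_authority

# Example: the CRO confirms customer harm and halts without a committee vote.
decision = HaltDecision(Trigger.CUSTOMER_HARM, confirmed=True, decided_by="CRO")
print(may_halt(decision, stop_authority="CRO"))  # True
```

The design point the sketch makes: approval logic would need a vote count; halt logic needs only an identity check and a confirmed trigger.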

Post-Halt Review

The AI approval committee convenes within _____ business days to evaluate the halt and documents one of three outcomes: remediate and restore, remediate and re-approve, or halt for ongoing investigation. A written record of the halt rationale, trigger evidence, and committee decision is filed with _____________ (governance repository). 

Board Reporting

The board's Risk or Audit Committee receives a summary of any halt within _____ business days, and an annual summary of all halts, including nil reports if none occurred, is included in board risk reporting.


When the stakes exceed your internal capacity:

  • AI Exposure Diagnostic: A 2-hour strategic evaluation for risk, compliance, and legal leaders to identify your highest-priority governance gaps and deliver a 90-day remediation roadmap.

  • 12-Week Governance Sprint: Translate regulatory requirements into audit-ready policies, control frameworks, and accountability structures.

  • Ongoing Advisory Retainer: Embedded judgment for policy updates, vendor assessments, and board prep as regulations and technology evolve.

Reply with “Diagnostic” or “Sprint” to schedule a conversation for next month.

Chris Cook writes Judgment Call weekly for compliance and risk officers navigating AI governance.

Former IBM Vice President and Deputy Chief Auditor. Published in the AI Journal, speaker at Yale.

Chris Cook

Managing Partner & Founder

Blackbox Zero

Forwarded this by a colleague? Subscribe to Judgment Call
