Your AI Policy Approves the Tool. It Doesn't Approve the Use Case.

Your AI policy names approved tools, but does it say anything about approved use cases?

Most AI policies govern which tools are allowed, but almost none govern which decisions those tools are allowed to inform.

The Situation

HR teams were among the first to adopt AI internally and are now among its most comfortable users. So when the CFO or CEO hands down resource-reduction targets, they approach the task the same way they've been encouraged to approach everything else: open the corporate-sanctioned tool, upload headcount data, performance reviews, and productivity metrics, and ask it to suggest which cuts deliver the most savings with the least impact. The gap is that sanctioning an AI tool and sanctioning an AI use case are two different governance decisions, and most AI policies address only the first. By the time anyone notices the difference, the CFO has seen the targets, the governance risk is embedded in the scenarios, and a decision may already have been made.

The Exposure

"Human in the loop" is the most operationally dangerous phrase in enterprise AI governance right now, because it leads decision-makers to conclude that personal sign-off at the end of a process insulates the organization from anything the AI did upstream. The EEOC's May 2023 guidance on AI and adverse impact under Title VII states explicitly that an employer can be liable for disparate impact when an algorithmic tool informs an employment decision, regardless of whether a human made the final call. When a general-purpose LLM produces prioritized reduction target recommendations, usually by geography or business unit, it's functioning as an Automated Employment Decision Tool (AEDT) under the definitions in NYC Local Law 144 and the Illinois Human Rights Act amendment, whether the HR team thinks of it that way or not. And unlike purpose-built workforce analytics software, there's no error flag, no confidence range, and no audit trail - just a confidently asserted recommendation that’s now part of the evidence trail supporting a multimillion-dollar reduction.

The Judgment Call

The verdict isn't that LLMs can't be used; it's that using one without a governance layer specific to the use case is how a defensible business decision becomes a class action lawsuit. The AI policy that blessed the tool for general use didn't conduct a bias audit on its deployment for workforce planning, didn't make the required AEDT disclosures to employees in covered jurisdictions, and didn't document the human oversight protocol. If your organization is using a general-purpose LLM for sensitive or protected decisions, the governance layer has to travel with it: a documented use-case review before the analysis runs, a record of what data was ingested and what the model was asked, a bias check on the output, and a paper trail showing that a human with appropriate authority evaluated the recommendation independently. That's not a prohibitive standard; it's the difference between a process that can withstand scrutiny and one that can't. (A rough sketch of what that record might capture follows the list below.)

  • Risk: Employees using approved tools in their daily roles may be unsure when a use-case governance review is required, and may forgo AI for legitimate projects if the process seems burdensome or confusing.

  • Benefit: A documented use-case review completed before the analysis runs lets the organization keep using AI for sensitive planning work while creating the evidentiary record that matters most if a disparate-impact claim follows.
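What might that traveling record look like in practice? Below is a minimal sketch in Python of the fields such a record could capture, drawn from the four elements named above. Every field name is an illustrative assumption, not a prescribed schema.

from dataclasses import dataclass
from datetime import date

# Illustrative sketch only: field names are assumptions, not a prescribed schema.
@dataclass
class UseCaseGovernanceRecord:
    use_case: str                # e.g., "workforce reduction scenario analysis"
    tool: str                    # the sanctioned AI tool being applied
    review_completed: date       # documented review dated BEFORE the analysis runs
    data_ingested: list[str]     # what data was uploaded (headcount, reviews, ...)
    prompts: list[str]           # what the model was actually asked
    bias_check_outcome: str      # result of the bias check on the output
    reviewer: str                # human with appropriate authority
    independent_evaluation: str  # how the recommendation was independently evaluated

Attached to the decision file, a record like this is the paper trail the paragraph above describes.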

This Week’s Action

  • What to do: Review your current AI use policy and ensure it distinguishes between general productivity uses and protected decision categories such as hiring, promotions, compensation, or workforce reductions.

  • Who to involve: General Counsel / outside employment counsel to confirm which use cases trigger AEDT obligations; CHRO to inventory where AI is being used in any people-decision workflow.

  • What outcome to achieve: A half-page addendum to your existing AI use policy that identifies the protected decision categories in your organization requiring a use-case governance review before any AI tool is applied, with a simple intake process to follow when those situations arise.

  • Time required: 90 minutes (45 minutes to review your current policy and protected-category list; 45 minutes with counsel to confirm jurisdictional requirements and validate the intake process).

Artifact

Use this before applying any sanctioned AI tool to a business decision that affects people, resources, or access.

Q1: Does this decision materially affect a person's employment, compensation, vendor status, or access to products or services?
→ NO: Proceed; standard AI use policy applies
→ YES: Continue to Q2

Q2: Are you using the AI tool to identify, rank, score, or recommend specific individuals, groups, or entities as targets of that decision?
→ NO: Proceed; standard AI use policy applies
→ YES: Continue to Q3

Q3: Is there a documented use-case governance review on file for this specific application of this tool?
→ YES: Proceed; attach the governance review to the decision records
→ NO: Stop; use-case governance review required. Contact Legal or your AI Governance function to initiate one.

If you're unsure how to answer any of these questions, that's the signal to stop and escalate. A governed process asks these questions before the output reaches a decision-maker, not after.
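For teams that embed this intake in workflow tooling, the triage reduces to a short function. Here is a minimal sketch in Python mirroring the three questions above; the function name, argument names, and the use of None for "unsure" are illustrative assumptions, not a prescribed implementation.

from typing import Optional

# Illustrative sketch of the three-question triage; names are assumptions.
# None means "unsure," which is itself the signal to stop and escalate.
def use_case_triage(affects_person: Optional[bool],
                    targets_individuals: Optional[bool],
                    review_on_file: Optional[bool]) -> str:
    # Q1: does the decision materially affect employment, compensation,
    # vendor status, or access to products or services?
    if affects_person is None:
        return "Stop and escalate: unsure on Q1"
    if not affects_person:
        return "Proceed; standard AI use policy applies"
    # Q2: is the AI identifying, ranking, scoring, or recommending
    # specific individuals, groups, or entities as targets?
    if targets_individuals is None:
        return "Stop and escalate: unsure on Q2"
    if not targets_individuals:
        return "Proceed; standard AI use policy applies"
    # Q3: is a documented use-case governance review on file?
    if review_on_file is None:
        return "Stop and escalate: unsure on Q3"
    if review_on_file:
        return "Proceed; attach the governance review to the decision records"
    return "Stop; use-case governance review required"

# Example: reduction targeting with no review on file
print(use_case_triage(True, True, False))  # Stop; use-case governance review required

The point of the None branches is that an unanswered question routes to escalation by default, matching the rule stated above.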


When the stakes exceed your internal capacity:

  • AI Exposure Diagnostic: A 2-hour strategic evaluation for risk, compliance, and legal leaders to identify your highest-priority governance gaps and deliver a 90-day remediation roadmap.

  • 12-Week Governance Sprint: Translate regulatory requirements into audit-ready policies, control frameworks, and accountability structures.

  • Ongoing Advisory Retainer: Embedded judgment for policy updates, vendor assessments, and board prep as regulations and technology evolve.

Reply with "Diagnostic" or "Sprint" to schedule a conversation for next month.

Chris Cook writes Judgment Call weekly for compliance and risk officers navigating AI governance.

Former IBM Vice President and Deputy Chief Auditor. Published in the AI Journal, speaker at Yale.

Chris Cook

Managing Partner & Founder

Blackbox Zero

Forwarded by a colleague? Subscribe to Judgment Call.

Next: AI Voice Agents in Hiring Are Not Ready Without These Three Controls