AI Voice Agents in Hiring Are Not Ready Without These Three Controls
Who decided what your voice agent just asked your candidates?
Your vendor did, and their certification of their own compliance doesn't help you.
The Situation
AI voice agents are now conducting phone screens and structured interviews at scale across industries, replacing the first step of manual recruitment with automated candidate interactions that run around the clock. The efficiency case is compelling, but while the vendor trained the model, you own the compliance obligation. The risk CCOs are facing right now is that voice agents can hallucinate and ask a prohibited question mid-conversation, before anyone on your team can intervene.
The Exposure
The EEOC has clarified in published guidance that an employer using a third-party AI selection tool can be held liable for adverse impact, even when the tool was designed and administered by an outside vendor. The EEOC's position is clear that a vendor's assurances about its own compliance don't substitute for the employer's independent obligation. New York City's Local Law 144 requires annual bias audits of automated employment decision tools, with civil penalties of up to $500 for a first violation and up to $1,500 for each subsequent one, and each day a noncompliant tool is in use counts as a separate violation. Illinois HB 3773 extended anti-discrimination requirements and notice obligations to AI used in hiring across the full employment lifecycle. With hiring practices being highly regulated and applicant pools large, prohibited question complaints won't stay internal for long. And when regulators show up, they ask for your documentation, not your vendor's.
The Judgment Call
Leading AI agent vendors publish ethics frameworks and third-party bias audits attesting to adherence to EEOC guidelines. That's worth knowing, but it's not enough. Those audits cover the vendor's standard configuration against their aggregate customer base, not your implementation with your applicant pool in your jurisdictions. Don't deploy voice agents in any hiring workflow without three things in place: a suppression list of prohibited inquiry topics signed off by your employment counsel, a bias testing result on your specific implementation, and a human review step before any adverse employment decision is finalized.
Risk: Requiring independent bias testing will create friction with vendors whose configurations aren't designed to support it. Expect pushback framed as a technical limitation, and expect the compliant vendor list to shrink.
Benefit: A documented suppression list and pre-go-live bias testing result, signed off by counsel, are the difference between a defensible program and a standing liability. They're also the first documents a plaintiff's attorney will request.
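For readers with an engineering partner at the table, the first of the three controls can be sketched in a few lines of code. Everything below is illustrative: the topic names, trigger phrases, and function names are hypothetical, not any vendor's API, and a real suppression list comes from your employment counsel, per jurisdiction, not from a blog post.

```python
# Hypothetical sketch of a runtime suppression filter for agent-generated
# questions. Topics and phrases here are illustrative placeholders only;
# the authoritative list must come from employment counsel.
SUPPRESSION_LIST = {
    "age": ["how old", "year were you born", "did you graduate"],
    "family_status": ["married", "children", "pregnant", "childcare"],
    "national_origin": ["citizen", "where were you born", "native language"],
    "disability": ["disability", "medical condition", "medication"],
}

def check_question(question: str) -> list[str]:
    """Return the prohibited topics a generated question touches, if any."""
    q = question.lower()
    return [topic for topic, phrases in SUPPRESSION_LIST.items()
            if any(phrase in q for phrase in phrases)]

def gate_question(question: str) -> tuple[bool, list[str]]:
    """Allow or block a question before it reaches the candidate."""
    hits = check_question(question)
    if hits:
        # In production: log the event for the Step 3 deviation review,
        # suppress the question, and fall back to a counsel-approved one.
        return False, hits
    return True, []
```

The point of the sketch is the control pattern, not the keyword matching: generated output is checked against a counsel-owned list before delivery, and every hit is logged, which is exactly the evidence trail a regulator or plaintiff's attorney will ask for.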
This Week’s Action
What to do: Inventory every AI voice agent deployment in your HR workflow, including any tool touching candidate screening or structured interviews. If you don't have a complete list, request it from HR and IT in parallel.
Who to involve: Your HR Partner, technology lead, and your employment counsel.
What outcome to achieve: Written confirmation that each deployed voice agent has a suppression list reviewed by counsel, and a bias testing result against your actual applicant population, or a documented remediation plan if either is missing.
Time required: 60 minutes to compile the inventory and pull vendor documentation; allow additional time for counsel review, which will depend on the complexity of your jurisdictions and the number of tools in scope.
Artifact
Before any AI voice agent goes live in a hiring or screening workflow, validate these five points. A single "No" without a remediation plan is a go-live blocker.
Step 1: Scope. Does this voice agent interact with candidates or applicants in any hiring, screening, or promotion workflow?
→ Yes: Proceed to Step 2.
→ No: Outside this framework; standard AI governance review applies.
Step 2: Question Set Ownership. Has your employment counsel produced a written suppression list for this agent's question-generation logic, specific to your jurisdictions?
→ Yes: Proceed to Step 3.
→ No: Stop; vendor ethics documentation isn’t sufficient.
Step 3: Hallucination Controls. Does the vendor both log and flag agent output that deviates from the approved question set, and has your team reviewed that log since deployment?
→ Yes: Proceed to Step 4.
→ No: Stop; a vendor that can't support this capability warrants a hard conversation before you renew.
Step 4: Bias Testing. Has bias testing been conducted on your specific implementation against your actual applicant population?
→ Yes: Proceed to Step 5.
→ No: Stop; vendor platform audits cover their product, not your deployment. Engage an independent auditor before go-live.
Step 5: Adverse Action. Is there a documented human review step before any adverse employment decision is finalized?
→ Yes: Cleared; schedule annual re-review of all five steps.
→ No: Stop; an automated adverse action with no human checkpoint is indefensible in any jurisdiction with AI employment law exposure.
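If your team tracks deployments in a register, the five steps above reduce to a simple go-live gate. This is a sketch under assumptions: the field and function names are invented for illustration, not a standard schema.

```python
from dataclasses import dataclass

@dataclass
class VoiceAgentReview:
    """Answers to the five checklist questions for one deployment.
    Field names are illustrative, not a standard schema."""
    touches_candidates: bool          # Step 1: Scope
    counsel_suppression_list: bool    # Step 2: Question Set Ownership
    deviation_logging_reviewed: bool  # Step 3: Hallucination Controls
    implementation_bias_tested: bool  # Step 4: Bias Testing
    human_adverse_review: bool        # Step 5: Adverse Action

def go_live_blockers(review: VoiceAgentReview) -> list[str]:
    """Return unresolved steps that block go-live; an empty list means cleared."""
    if not review.touches_candidates:
        return []  # Outside this framework; standard AI governance applies.
    checks = [
        (review.counsel_suppression_list, "Step 2: counsel-approved suppression list"),
        (review.deviation_logging_reviewed, "Step 3: deviation logging reviewed"),
        (review.implementation_bias_tested, "Step 4: implementation-level bias test"),
        (review.human_adverse_review, "Step 5: human review of adverse decisions"),
    ]
    return [label for passed, label in checks if not passed]
```

Run it against each entry in this week's inventory: any non-empty result is a go-live blocker until a remediation plan is documented, and a cleared result still gets the annual re-review from Step 5.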
When the stakes exceed your internal capacity:
AI Exposure Diagnostic: A 2-hour strategic evaluation for risk, compliance, and legal leaders to identify your highest-priority governance gaps and deliver a 90-day remediation roadmap.
12-Week Governance Sprint: Translate regulatory requirements into audit-ready policies, control frameworks, and accountability structures.
Ongoing Advisory Retainer: Embedded judgment for policy updates, vendor assessments, and board prep as regulations and technology evolve.
Reply with "Diagnostic" or "Sprint" to schedule a conversation for next month.
Chris Cook writes Judgment Call weekly for compliance and risk officers navigating AI governance.
Former IBM Vice President and Deputy Chief Auditor. Published in the AI Journal, speaker at Yale.
Chris Cook
Managing Partner & Founder
Blackbox Zero
Forwarded this by a colleague? Subscribe to Judgment Call