Shellye Archambeau built Ignite Ambition to close the mentorship gap for early- and mid-career professionals facing consequential career decisions. This fireside chat covered career navigation, leading through complexity, and the transition from senior executive to founder — including why AI governance is the defining leadership competency of this decade.
Most enterprises are still closing foundational governance gaps — but agentic AI is already moving into production, and tiered LLM-SLM architectures are next. Presented at the Corporate Governance & Ethics in the Age of AI conference, this session covers what boards and risk leaders need to prepare for before accountability diffuses across the model stack.
88% of firms use AI, but traditional audit procedures weren't built for systems that make decisions through statistical patterns no one can fully retrace. Published in Corporate Board Member, this piece examines the governance inflection point boards are facing — and why waiting for federal regulation is no longer a viable risk strategy.
Chris Cook has been named an Operating Partner at Altos Equity Partners. The role deepens Blackbox Zero's visibility into how investors pressure-test AI and data governance during diligence — and feeds directly into sharper, faster advisory work for clients building AI capabilities responsibly.
79% of senior executives report that AI agents are already deployed in their organizations. Unlike copilots, agents plan tasks, call APIs, and execute end-to-end actions in live systems — shifting the risk from bad output to bad action. This brief covers the three controls every regulated organization needs before go-live.
AI governance is being misframed as a compliance tax. For regulated companies deploying AI in high-stakes decisioning, audit-ready controls aren't overhead — they accelerate board approvals, win procurement confidence, and protect brand value. Published in The AI Journal.
46% of employees have uploaded sensitive company data into public AI tools. In regulated environments, that's not a behavior problem — it's a governance gap. This brief covers three concrete actions that take you from guessing to measurable control.
GPT-5's own model card shows a 26% error rate. When AI gets it wrong, most people ask the wrong follow-up — and get a confident, misleading answer in return. This brief covers the one question to avoid and two alternatives that produce a more reliable, audit-friendly workflow.
I presented at Yale School of Management on what actually works when AI moves from theory into regulated, high-stakes environments. The session covered the PRIME framework — a practical operating system for evaluating AI use cases, designing controls, and building the evidence trail that boards and regulators require.