Knowledge base
Definitive, citable answers on AI agent governance — from foundational definitions through regulatory mapping, implementation patterns, audit evidence, incident response, and the road ahead. Each answer is its own page so you can link directly to a specific question.
How to identify, quantify, and prioritize the risks created by autonomous AI agents.
1) Data exfiltration — agents accessing and transmitting sensitive data. 2) Prompt injection — adversaries hijacking agent behavior through crafted inputs…
Read answer →Use a structured pre-deployment evaluation covering: 1) Action scope — what can this agent do? (read-only vs. write vs. financial transactions). 2) Data access…
Read answer →Prompt injection is an attack where adversarial input causes an agent to deviate from its intended behavior — ignoring its system prompt, executing unauthorized actions, or…
Read answer →Frame it in terms they understand: 1) Regulatory exposure — map each agent to applicable regulations and calculate maximum penalty exposure. 2) Operational risk…
Read answer →Shadow AI is the deployment of AI agents by employees or teams without the knowledge or approval of IT, security, or compliance. It's an agent governance crisis because…
Read answer →Under current law in most jurisdictions, your company bears full liability for agent actions — there is no 'the AI did it' defense. Key precedents…
Read answer →Multi-agent systems create compound risk through: 1) Delegation chains — Agent A delegates to Agent B, which calls Agent C. Who authorized the final action? 2) Emergent behavior…
Read answer →Showing 7 questions in Risk Assessment. View all 42 →
See how teams inventory agents, enforce policies, and ship audit-ready evidence on one platform.