The GRASP Framework
The GRASP framework provides a structured approach to reason about AI agent risk. The goal is not to eliminate risk, but to make it explicit, understood, and consciously accepted so governance can accelerate delivery instead of blocking it.
G — Governance
How the agent is developed and approved: Evaluations and testing, SDLC integration, observability and audit trails.
R — Reach
What the agent can access: Explicit tools and APIs, indirect access via system, network, or shell permissions.
A — Agency
How autonomous the agent is: Human-in-the-loop points, triggers and execution boundaries, decision-making authority.
S — Safeguards
What limits the blast radius: Reversible actions, scoped permissions, controls that prevent incidents from becoming catastrophic.
P — Potential Impact
The worst-case impact on the business: Financial, operational, security, or reputational.
GRASP provides a shared language for engineers, executives, and risk stakeholders to reason about AI agents with clarity, align on acceptable risk, and unlock faster execution.