GRASP + CRI: Fast Judgment, Real Controls
GRASP is how you decide. CRI’s control matrix is how you prove. Here’s the operating model that avoids both checkbox theater and vague anxiety.
The Treasury’s new AI risk resources for banks — an AI Lexicon and the Financial Services AI Risk Management Framework (FS AI RMF) — are a real signal: the era of “we’re experimenting” is ending, and the era of “show me your controls” has arrived. (U.S. Treasury press release, AIEOG deliverables)
But this moment also produces a predictable failure mode: large frameworks add enough friction that implementation teams route around them, opening a gap between what the framework records and what the system actually does in production.
Here’s the right mental model:
- GRASP is the fast judgment layer: a shared language to decide what risk posture a specific agent (or agent system) actually has.
- CRI’s Financial Services AI Risk & Control Matrix is the control library and evidence layer: a deep catalog of control objectives, implementation guidance, and “what good looks like.”
You don’t pick one. You sequence them.
The 325-control paradox
We reviewed the full CRI matrix spreadsheet. In the “control objectives” view it enumerates hundreds of objectives, and in the “examples + evidence” view it expands into over a thousand concrete examples and evidence artifacts.
This is both valuable and dangerous.
- Valuable, because it operationalizes governance into implementable controls and defensible evidence.
- Dangerous, because if the on-ramp is too heavy, teams route around it — and your “AI governance program” becomes a fiction.
In other words: the bigger the framework, the more you need a small framework to decide what matters for this system right now.
What GRASP does that a control matrix can’t
GRASP exists for a reason: most agent risk doesn’t come from “AI” as a concept. It comes from ordinary engineering choices made at runtime.
- Reach: what can it touch (credentials, networks, PII, internal tools)?
- Agency: what can it do without approval (write actions, tool execution, delegation)?
- Safeguards: what stops harm when it’s wrong (circuit breakers, canaries, rollback)?
- Governance: can we observe and intervene (logs, ownership, kill switches)?
- Potential Damage: what’s the credible worst case (blast radius, irreversibility, detection latency)?
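The five dimensions above are a judgment tool, but they can be sketched as a minimal posture record plus a blunt triage rule. Everything here — the field names, the low/medium/high levels, and the triage thresholds — is a hypothetical illustration, not part of GRASP itself:

```python
from dataclasses import dataclass

# Hypothetical illustration of a GRASP posture record.
# Levels and triage thresholds below are assumptions, not prescribed by GRASP.
LEVELS = {"low": 0, "medium": 1, "high": 2}

@dataclass
class GraspPosture:
    reach: str             # what the agent can touch
    agency: str            # what it can do without approval
    safeguards: str        # what stops harm when it's wrong
    governance: str        # can we observe and intervene
    potential_damage: str  # credible worst case

def triage(p: GraspPosture) -> str:
    """Blunt rule: high exposure with weak mitigation demands deep review."""
    exposure = max(LEVELS[p.reach], LEVELS[p.agency], LEVELS[p.potential_damage])
    mitigation = min(LEVELS[p.safeguards], LEVELS[p.governance])
    if exposure == 2 and mitigation == 0:
        return "deep-review"
    if exposure >= 1 and mitigation < exposure:
        return "standard-review"
    return "light-touch"
```

The point of the sketch is the shape of the decision: exposure (Reach, Agency, Potential Damage) is weighed against mitigation (Safeguards, Governance), and the weakest mitigation is what counts.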
A control matrix can enumerate great practices. But it cannot, by itself, force the organization to answer: “what does this agent actually have permission to do in this environment?”
This is the same point we made in The Shadow is Cast by the Object: in 2026, the hard problem isn’t just unapproved tools — it’s approved tools being used in unapproved ways. A big framework won’t catch that drift unless it is paired with a runtime-aware posture model.
What CRI provides that GRASP intentionally doesn’t
GRASP is a decision framework. It’s not trying to be a full GRC control library.
The CRI matrix does the heavy lifting that most teams don’t want to reinvent:
- Control objectives with IDs and hierarchy (so you can build policy structures and map ownership).
- Risk names and risk statements (so you can explain “why this control exists” in audit language).
- Implementation guidelines (so teams can actually build the thing).
- Examples of effective evidence (so you can prove it’s working, not just asserted).
- Maturity staging (so you can roadmap toward “embedded” controls instead of boiling the ocean).
Put bluntly: CRI is how you avoid blank-page syndrome when the regulator (or board) asks, “OK, show me what you did.”
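“Examples of effective evidence” only stay cheap if the evidence is emitted by the system as it runs. A minimal sketch of that idea, with a hypothetical control-ID format and field names (nothing here is CRI’s actual schema):

```python
import hashlib
import json
from datetime import datetime, timezone

def emit_evidence(control_id: str, agent: str, artifact: dict) -> str:
    """Emit a control-evidence record from inside the workflow, so the audit
    record is produced by the system itself rather than reconstructed later.
    The control_id format and field names are hypothetical."""
    record = {
        "control_id": control_id,  # e.g. a CRI-style objective identifier
        "agent": agent,
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "artifact": artifact,      # log excerpt, config snapshot, approval
    }
    payload = json.dumps(record, sort_keys=True)
    # A content hash makes each record tamper-evident in append-only storage.
    record["digest"] = hashlib.sha256(payload.encode()).hexdigest()
    return json.dumps(record)
```

In practice the records would land in an append-only store keyed by control ID, so “show me the evidence for objective X” is a query, not an archaeology project.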
The operating model: sequence small → large → small
Here’s how the two frameworks work together without creating bureaucracy theater.
- Start with GRASP (10–15 minutes). Describe the agent/system as deployed today. Be pessimistic. Treat tool access as real power.
- Pick the CRI slice that matches the GRASP shape. High Reach + High Agency demands deeper governance, stronger safeguards, and richer evidence. Low Reach + Low Agency gets a lighter slice.
- Implement controls as product work, not paperwork. Build the control into the workflow (defaults, guardrails, circuit breakers), and automate evidence capture (logs, config snapshots, approvals) so the “record” is emitted by the system—not reconstructed after the fact.
- Re-GRASP on configuration change. New tool? New data source? New autonomy? The posture changed. Re-scope the CRI slice.
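The “re-GRASP on configuration change” step can be automated as a cheap drift check: fingerprint the posture-relevant parts of the agent’s configuration, and flag a re-assessment whenever the deployed config no longer matches the one that was scored. The key names here are assumptions for illustration:

```python
import hashlib
import json

def posture_fingerprint(config: dict) -> str:
    """Fingerprint the posture-relevant slice of an agent's configuration.
    The key names are hypothetical; the point is that tools, data sources,
    and autonomy settings are exactly what changes a GRASP posture."""
    relevant = {k: config.get(k) for k in ("tools", "data_sources", "autonomy")}
    return hashlib.sha256(json.dumps(relevant, sort_keys=True).encode()).hexdigest()

def needs_regrasp(assessed_fingerprint: str, config: dict) -> bool:
    """True when the deployed config no longer matches the assessed posture."""
    return posture_fingerprint(config) != assessed_fingerprint
```

Wiring this into CI or deploy tooling means a new tool grant or autonomy bump fails loudly until someone re-runs the 10–15 minute GRASP pass and re-scopes the CRI slice.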
This sequencing solves the core tension: GRASP keeps governance low-friction, while CRI makes it defensible.
A concrete example
Suppose you have an agent that drafts customer-facing communications and can trigger downstream workflows. GRASP immediately forces the questions that matter:
- Reach: does it see account data? can it call internal systems? does it touch PII?
- Agency: can it send messages without approval? can it open cases? can it execute actions?
- Safeguards: do you have circuit breakers when outputs deviate? can you shut it down fast?
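A circuit breaker for an agent like this can be very simple: track recent output checks, and trip when too many deviate within a window. This is an illustrative sketch; the thresholds and the deviation signal are assumptions, not anything prescribed by GRASP or CRI:

```python
class CircuitBreaker:
    """Trip-on-deviation breaker for an agent's outbound actions
    (illustrative sketch; thresholds are hypothetical)."""

    def __init__(self, max_deviations: int, window: int):
        self.max_deviations = max_deviations
        self.window = window
        self.recent: list[bool] = []
        self.open = False  # open = agent halted

    def record(self, deviated: bool) -> bool:
        """Record one output check; return True if the agent may continue."""
        self.recent.append(deviated)
        self.recent = self.recent[-self.window:]
        if sum(self.recent) >= self.max_deviations:
            self.open = True  # trip: stop sends, page a human
        return not self.open
```

Note the breaker stays open until a human resets it — fast automatic shutdown, deliberate human restart — which is the Governance half of the Safeguards question.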
Once you know the posture, CRI becomes useful in a different way: it provides the control objectives, guidelines, and evidence artifacts that turn “we thought about it” into “we built it.”
Conclusion
Frameworks aren’t rivals. They’re layers.
- GRASP gives you the fast, runtime-aware judgment call.
- CRI gives you the deep controls and evidence to operationalize that judgment at scale.