RSAC 2026 dropped some numbers this week that are worth sitting with.
85% of organizations have adopted agentic AI in some form. Only 5% have it running at production scale. And 63% say they can't enforce purpose limits on their agents. (Source: TechRepublic coverage of RSAC 2026.)
That 63% figure is the one that sticks. It means the majority of teams deploying agents have no mechanism to say "this agent is allowed to query the database but not send emails" and actually enforce it at runtime. The agent's scope is defined in a prompt or a README, not in a policy engine that evaluates every action.
SiliconANGLE called it "the agentic wild west." USDM's post-RSAC writeup described agents as "the ultimate insider threat." Both are pointing at the same gap: agents act, and nobody reviews what they're doing before they do it.
This is the exact problem we built SidClaw to address. The policy engine evaluates every tool call against a priority-ordered rule set. If an agent tries to exceed its purpose limits, the action gets denied or held for human approval. The 63% who can't enforce purpose limits today could enforce them with a withGovernance() wrapper and a policy file.
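To make "purpose limits enforced at runtime" concrete, here is a minimal sketch of a priority-ordered policy engine behind a `withGovernance()`-style wrapper. Only the `withGovernance()` name comes from this post; the rule shape, field names, and decision logic are illustrative assumptions, not SidClaw's actual API.

```typescript
// Hypothetical sketch, not SidClaw's real implementation: rule shape,
// field names, and defaults are assumptions for illustration.
type Decision = "allow" | "deny" | "hold";

interface Rule {
  priority: number;   // lower number = evaluated first
  tool: string | "*"; // tool name this rule matches; "*" matches any tool
  decision: Decision;
}

// First matching rule by ascending priority wins. The default is "hold",
// so an action outside the agent's defined purpose waits for human
// approval instead of running.
function evaluate(rules: Rule[], tool: string): Decision {
  const sorted = [...rules].sort((a, b) => a.priority - b.priority);
  for (const rule of sorted) {
    if (rule.tool === tool || rule.tool === "*") return rule.decision;
  }
  return "hold";
}

// Wraps a tool-calling function so every call is checked against policy
// before it executes -- the scope lives in rules, not in a prompt.
function withGovernance<T>(
  rules: Rule[],
  call: (tool: string, args: T) => string
) {
  return (tool: string, args: T): string => {
    const decision = evaluate(rules, tool);
    if (decision === "deny") return `denied: ${tool}`;
    if (decision === "hold") return `held for approval: ${tool}`;
    return call(tool, args);
  };
}

// Purpose limits from the example above: database queries allowed,
// sending email explicitly denied, everything else held.
const rules: Rule[] = [
  { priority: 10, tool: "query_database", decision: "allow" },
  { priority: 20, tool: "send_email", decision: "deny" },
];

const governed = withGovernance(rules, (tool, _args) => `ran: ${tool}`);
console.log(governed("query_database", {})); // ran: query_database
console.log(governed("send_email", {}));     // denied: send_email
console.log(governed("delete_files", {}));   // held for approval: delete_files
```

The design choice worth noting: defaulting to "hold" rather than "allow" means a new tool the agent discovers is governed by the approval queue until someone writes a rule for it.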
But the harder question isn't technical. It's organizational. Who defines the purpose limits? Who reviews the approval queue? RSAC surfaced the problem clearly. The industry hasn't converged on answers yet.
What's your team's approach to scoping agent permissions? Are you relying on prompt instructions, IAM roles, or something else?