Agents aren't a new risk category
The questions have not changed; the speed and variance have
May 14, 2026 · 5 min read
The framing that agents introduce a category of risk requiring new disciplines doesn’t hold up. What is new is the speed and the variance.
Agents have raised the profile of information security, and that’s a positive. The AI governance conversation has made compliance visible to boards, to buyers, and to engineers. More care is being put into how systems handle data and how they handle delegated authority. It’s helping information security get the attention it deserves.
The questions have not changed
Anyone providing infrastructure has always had to answer four questions:
- Should this actor be able to do this?
- How will we know what they have done?
- Who can add new capabilities, and what should those capabilities be?
- How do we respect the data we have been given custody of?
These questions applied when the actor was a human at a terminal. They were the right questions when another system was invoking an API. They are still the right questions now that the actor is an agent.
What is actually new
Humans gave us variance. They make unexpected choices, follow odd paths, and use systems in ways we couldn’t anticipate. APIs gave us speed. They execute the same paths a thousand times before a human notices anything is off. Agents combine both. They are fast and varied at the same time, and that combination is the part that is new.
This surfaces in three areas:
Authentication
Tokens were designed for human-shaped or API-shaped use.
A human signs in to create a session; they have high agency but perform a limited number of operations per day. An API key is held by a system and identifies that system; it has low agency but performs a potentially limitless number of operations.
Agents straddle both. Today they are often borrowing credentials from one of the other two classes of actor, which lets them act relentlessly and with high agency.
Authorization
Humans and systems are broadly given the same capabilities; the real controls come from intuition and self-preservation, or from code review.
Agents do not have a job to lose, and it isn’t feasible to review every action they may take. This is driving the shift towards fine-grained authorization (FGA). An eternal binary grant does not make sense for agents.
Grants for agents need to be time-limited, count-limited, or scoped to a specific context.
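A minimal sketch of what such a grant could look like. The class and field names here are illustrative, not any particular FGA product’s API; the point is that every dimension of the grant — time, count, and context — is bounded.

```python
from dataclasses import dataclass
import time


@dataclass
class AgentGrant:
    """Hypothetical: a capability grant that expires, counts down, and is scoped."""
    scope: str            # the one context this grant covers, e.g. "calendar:read"
    expires_at: float     # unix timestamp: the time limit
    uses_remaining: int   # the count limit

    def permits(self, requested_scope: str) -> bool:
        """Allow an operation only while the grant is alive, in scope, and in budget."""
        if time.time() >= self.expires_at:
            return False          # time-limited
        if requested_scope != self.scope:
            return False          # scoped to a specific context
        if self.uses_remaining <= 0:
            return False          # count-limited
        self.uses_remaining -= 1  # each permitted use consumes the budget
        return True
```

Unlike an eternal binary grant, this one dies on its own: the agent keeps its authority only for as long, and for as many operations, as the task requires.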
Risk limits
Rate limits exist to protect infrastructure from being overwhelmed. They broadly cap throughput from one actor such that they don’t negatively impact the infrastructure for other actors.
What traditional rate limits don’t protect is the data the infrastructure is a custodian of. An agent that stays within its rate limit can still cause significant harm if each operation it performs has consequences.
Alongside fine-grained authorization, I think we need a primitive we don’t yet have: a risk limit. Not all operations are created equal, so rather than only limiting velocity of operation, we need to cap risk exposure.
The odd high-risk operation is likely fine, but many in a short period deserve at least a second look. Many high-risk or bulk operations need not be blocked outright; they could instead be submitted as requests, to be reviewed by a human or a supervising system before execution. This is useful for teams and systems, but becomes essential for agents to progress effectively and safely.
Rate limits protect infrastructure. Risk limits protect data.
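One way the primitive could be sketched, under assumptions of my own: each operation carries a risk weight, cumulative weight is tracked over a sliding window, and anything that would blow the budget is queued for review rather than executed or silently dropped. The weights, budget, and names are all hypothetical.

```python
class RiskLimiter:
    """Hypothetical: cap cumulative risk exposure, not just operation rate."""

    def __init__(self, budget: float, window_seconds: float):
        self.budget = budget
        self.window = window_seconds
        self.executed = []         # (timestamp, weight) of operations already run
        self.pending_review = []   # operations held for a human or supervising system

    def submit(self, operation: str, weight: float, now: float) -> str:
        # Risk spent outside the sliding window no longer counts against the budget.
        self.executed = [(t, w) for t, w in self.executed if now - t < self.window]
        spent = sum(w for _, w in self.executed)
        if spent + weight > self.budget:
            # Over budget: don't execute, don't discard — ask for a second look.
            self.pending_review.append(operation)
            return "queued_for_review"
        self.executed.append((now, weight))
        return "execute"
```

A rate limiter would treat a read and a bulk delete identically; here the bulk delete consumes most of the budget on its own, so what follows it gets escalated instead of executed.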
Defect discipline matters more
Cronofy holds a “no known bugs” engineering principle. I’ve written about this before.
Even with that discipline, bugs are neither eliminated entirely nor resolved instantly. A bug is found, triaged, and fixed within hours or days. In the window between discovery and fix, the standard mitigation is a human-readable workaround: a note to a customer, a suggestion to avoid a specific path until the fix ships.
That mitigation doesn’t reliably translate to an agent. Agents don’t read workaround notes. They will execute the affected path at the rate they run, every time, until the underlying defect is fixed. The window between “we found it” and “we shipped the fix” was always a liability. With humans on the other end, it was a liability we could absorb. With agents on the other end, it’s a liability that compounds at the speed the agent operates.
That makes defect discipline more important under agents, not less. The cost of carrying bugs has gone up. The time to resolution is even more important.
“Why did that happen?”
With agents in the mix, answering “why did that happen?” becomes harder. Recording not just the what, but also the how and the why, has long been best practice; it is now a necessity. There’s no human to interview about intent, no code to read. The system’s record is the only record.
Teams could previously piece things together based on what humans reported and what code revealed. Now the system has to carry that weight on its own.
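What carrying that weight might look like in practice: a record written at the moment of the operation that captures the what, the how, and the why together. The field names and values here are assumptions for illustration, not a real schema.

```python
import json
import time


def audit_record(actor_id: str, actor_type: str, action: str,
                 target: str, grant_id: str, reason: str) -> str:
    """Hypothetical: one structured record per operation, written at call time."""
    return json.dumps({
        "timestamp": time.time(),
        "actor": {"id": actor_id, "type": actor_type},  # "human", "api", or "agent"
        "action": action,     # what happened
        "target": target,     # what it happened to
        "grant": grant_id,    # how: the authority the actor was acting under
        "reason": reason,     # why: the intent supplied with the request
    })
```

The `reason` field is the part agents change: intent has to arrive with the request, because there is no one to ask for it afterwards.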
“Why did that happen?” is the oldest question in software. Agents are the first actor that cannot help you answer it. That’s the risk, but it is not new.
Hey, I’m Garry Shutler
CTO and co-founder of Cronofy.
Husband, father, and cyclist. Proponent of the Oxford comma.