Garry Shutler

The Pareto Flip

How agentic coding is changing my calculus on product delivery

April 25, 2026 · 6 min read

I’ve been working in software for over twenty years. In that time, nothing has changed my calculus on product delivery in the way that working with Claude has over the past few months.

I was a sceptic for a long time. The early enthusiasm came largely from people who didn’t know how to develop software in the first place.

None of the people I’d consider peers, or whose opinions I particularly respected, were saying anything similar. So I was content to wait. I took the view that as these tools improved, I’d be able to pick them up when the time was right. It’s a variation on the wait calculation: you don’t lose out by waiting a bit longer to launch.

Now is the time to launch.

When it started to click

In the second half of 2025, two things shifted at once.

Agents became materially more capable at understanding intent. A lot of the problems we’d been trying to solve at Cronofy revolved around interpreting what people meant when they expressed availability. Cases where it had previously felt like forcing a square peg into a round hole just started to work. The agents actually understood.

MCP became established as a common interface. One of the persistent challenges with this class of problem had been getting free-text input into something well-shaped for a system to consume. MCP gave us a standardised interface for agents to talk to systems. It’s easy to underestimate how much that level of standardisation unlocks, and a lot of things fell into place at once.
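To make that concrete, the sketch below shows roughly what that interface looks like from the system side: a single tool exposed over MCP, built with the official TypeScript SDK. The tool name, schema, and canned response are illustrative only, not our actual integration.

```typescript
// Minimal MCP server exposing one tool over stdio, using the official
// TypeScript SDK (@modelcontextprotocol/sdk). Everything named here is
// illustrative, not Cronofy's real tooling.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "availability-demo", version: "0.1.0" });

// Declare a tool with a typed input schema. The agent does the hard part of
// turning free-text intent ("sometime Thursday afternoon") into this
// well-shaped call; the system only ever sees structured input.
server.tool(
  "query_availability",
  {
    start: z.string().describe("ISO 8601 start of the window"),
    end: z.string().describe("ISO 8601 end of the window"),
  },
  async ({ start, end }) => ({
    content: [{ type: "text", text: `No conflicts between ${start} and ${end}.` }],
  })
);

// Serve over stdio so any MCP-capable agent can connect.
const transport = new StdioServerTransport();
await server.connect(transport);
```

The point isn’t the handful of lines; it’s that the same well-defined surface works for any agent that speaks MCP, which is the standardisation doing the unlocking.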

People I respected started to take a serious second look, and reported that things had changed. I had found the tooling poor before, costing more time in editing than it “saved”, but I needed to look again.

Initially this was a cautious look. I wanted to understand the tooling myself before letting it loose across the team: get a grasp of its strengths and weaknesses so I could guide how it would be applied, and flag the areas of risk. The plan was to spend a couple of weeks evaluating it before gradually cascading it down through the department.

Within a few working days we were talking about how quickly we should roll it out to everyone. It was that good this time.

A capable engineer with zero ego

It’s difficult to avoid anthropomorphising these tools. You converse with them, so it feels like talking to a colleague.

The common framing is that working with one of these agents is like working with a very capable senior engineer who never pushes back. That captures something real, but it also undersells it in an important way.

It is genuinely capable. It knows a lot of patterns. It will produce something roughly the right shape, often very quickly. But it doesn’t always know which pattern to apply, or when. So you look at the output and say “this would be better done another way” and it’ll… just do that. There’s no debate like you would get with a peer.

That absence of emotional attachment to the code changes things. Engineers, myself included, get invested in code we’ve written. But no-one has written this code. An agent will happily throw an implementation away and produce a completely different approach two minutes later, and so long as the goal was well-defined for the first attempt, that new approach will achieve it too. Possibly not in the way you’d have foreseen, and possibly not in a way that ages well, but it will achieve it.

I’ve also seen it do the equivalent of stacking plasters on top of a wound that needs stitches. If you point that out: “this should be stitches, not more plasters”, it’ll happily go and do the stitches. But you have to be the one to notice. Understanding the problem, and the better way to solve it not just now but in six months’ time, is still a human job.

Where humans still matter most

A system’s codebase has a shape. There’s a vision of where it is, where it needs to be, and how it should transform between those two states. Even with a million tokens of context, the model has very little sense of that wider, inherently human, context.

What agents are very good at is the point-to-point transformation: this state to that state, when given clear direction. They have no real grasp of the rest of the space.

That’s where the human comes in: holding the shape of the system as a whole, and guiding the transformations from one state to the next in the most effective order.

The Pareto flip

The thing literally keeping me up at night is how much my calculus has changed. So much of “we can’t justify doing that, it’s too expensive” no longer applies. The 80/20 rule has flipped. Rather than 20% thought and 80% implementation, the thought now accounts for 80% of the effort.

That inversion turns a lot of previous decisions on their head.

A concrete example from the last week: there was a piece of system tooling I’d have liked to refine. Six months ago, that was a couple of days of focussed work I couldn’t easily justify. Now I write a careful prompt, ask Claude Code to plan and execute it, and check back in ten minutes. Fifteen minutes of my attention as a background task, against two days of focussed work previously.

The implication that interests me most isn’t “we can ship more product”, it’s that we can do the work better. Doing software well has always been expensive. We pride ourselves at Cronofy on doing it well, and it pays back over the long term, but we still compromise on some things because the cost is real.

The set of things worth compromising on is shrinking rapidly.

What does this mean for how we organise?

If the cost of doing things well has dropped by an order of magnitude, a lot of assumptions about how software companies are structured need re-examining.

We’ve historically grown teams to make more possible. Teams are not simply typists, but the mechanical act of typing out the code we held in our minds was a non-zero amount of the effort required to build product. That part is now much closer to zero.

We now need to think about cognitive capacity more than ever. The cognitive capacity of agents scales horizontally: add an agent, and you add a linear amount of capacity to the system. However, teams of humans need to orchestrate these agents, and we can only focus on a few significant threads at a time, at best.

The cognitive capacity of a team of humans doesn’t scale linearly with its size, and the bottleneck feels like it is shifting to the context the collective can hold in its head. If that holds, then to take advantage of these tools effectively I see a shift to smaller, cellular teams.

Today, we still need close oversight to not end up with a quagmire of a system in six months. Tomorrow that may not be the case. Cellular teams are also well-positioned to adapt to whatever the future brings.


There’s a lot in flux right now; the dust has barely been kicked up, never mind settled. I’m far from an expert user of these tools, but I’ve used them just enough to be confident the world has shifted, and to want to think carefully about what comes next.



Hey, I’m Garry Shutler

CTO and co-founder of Cronofy.

Husband, father, and cyclist. Proponent of the Oxford comma.