When Coding Becomes Easy, Systems Matter More Than Ever

Luciano Zeman

Writing code has gotten dramatically easier in just the last few months. Tasks that used to take hours or even days now take minutes. Complex bugs, race conditions, weird edge cases that once required long debugging sessions — AI can often spot them in seconds.

You'd expect this to massively accelerate engineering teams. The logic is clear: if writing code is easier, we should see much faster PR cycle times and higher output. But when you look at what's actually happening across the industry, a very different story is playing out — and the reason has nothing to do with AI itself.

The Two Types of Teams That Have Emerged With AI Adoption

Building Span gives me access to unified data that reveals how many engineering teams actually work. When I dive into that data, I notice two very distinct scenarios that keep playing out among teams that adopt AI.

On one hand, there are teams whose cycle times are dropping sharply and who are shipping faster than ever without sacrificing quality. These teams continuously ship new features without getting caught in prolonged code reviews, bug-squashing, or coordination work that takes away from delivery. This is the ideal state for every engineering team committed to AI-driven development.

On the other hand, most teams look the same as before. Many are actually worse off, but they can't pinpoint exactly why they're not seeing the results they expected from AI.

This doesn't have much to do with who's better at using AI or who writes the best prompts. Rather, there's something more systemic at play.

Writing Code Was Never the Real Bottleneck

We can use a simple mental model to understand why these two scenarios are happening: Eliyahu Goldratt’s Theory of Constraints.

The theory states that every system has a bottleneck, and that bottleneck determines the throughput of the entire system. If you try to optimize anything other than the bottleneck, the system doesn't improve. In many cases, you can actually make it worse.

Imagine you're at an airport where the security checkpoint can process 200 passengers per hour, but passport control can only handle 80. Even if you increase the security checkpoint's capacity to 300, the queue still piles up at passport control. The only way to improve this system is to fix the real bottleneck: the number of passengers that can go through passport control every hour.
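The arithmetic behind the airport example can be sketched directly: a serial pipeline's throughput is the minimum of its stage rates, and any extra upstream capacity just feeds a growing queue at the bottleneck. (The stage names and rates below are the illustrative numbers from the example, not real measurements.)

```python
# Illustrative sketch: a serial system can only move as fast as its
# slowest stage. Rates are passengers per hour, from the airport example.
stages = {"security": 200, "passport_control": 80}

def system_throughput(stages):
    """Throughput of a serial pipeline is capped by its slowest stage."""
    return min(stages.values())

def queue_growth_per_hour(stages, bottleneck="passport_control"):
    """How fast the queue grows in front of the bottleneck stage."""
    upstream = min(rate for name, rate in stages.items() if name != bottleneck)
    return max(0, upstream - stages[bottleneck])

print(system_throughput(stages))      # 80 passengers/hour
print(queue_growth_per_hour(stages))  # queue grows by 120/hour

# Upgrading security to 300/hour changes nothing downstream:
stages["security"] = 300
print(system_throughput(stages))      # still 80
```

Tripling the non-bottleneck stage only makes the queue grow faster; total throughput stays pinned at 80.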

This is what's happening today with many software teams as they increase AI usage. AI is drastically optimizing something that, in many cases, was never the bottleneck: writing code.

Writing code always felt like the slowest part of the job because it required so much focus, mental effort, and time. But while a feature may take a week to reach production, writing the code for that feature likely only took a day and a half.

Where did the rest of the time go? It went into:

  • Clarifying requirements

  • Waiting for the right decision makers (don't even get me started if someone left for vacation)

  • Code reviews that sat untouched for days

  • CI pipelines that took hours to run

  • Resolving conflicts with the main branch that forced you to revisit what you already finished

  • Deploys that could only happen on certain days of the week

In the end, you realize writing the code was never what took up most of your time.
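Amdahl's-law-style arithmetic makes this concrete: if a feature takes five working days to ship and only a day and a half of that is writing code (the illustrative split from above), then even infinitely fast coding can't cut the cycle below the three and a half days spent on everything else.

```python
# Amdahl's-law-style sketch with the illustrative numbers from the text:
# a feature takes 5 working days to ship, 1.5 of which are writing code.
total_days = 5.0
coding_days = 1.5

def cycle_time_with_speedup(total, coding, speedup):
    """Cycle time when only the coding step is accelerated by `speedup`x."""
    return (total - coding) + coding / speedup

print(cycle_time_with_speedup(total_days, coding_days, 10))            # ~3.65 days
print(cycle_time_with_speedup(total_days, coding_days, float("inf")))  # 3.5 days, the floor
```

A 10x coding speedup shaves barely a day off the cycle; the rest of the system sets the floor.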

More Code, Same System

This leads us to one of the biggest consequences of AI adoption today: engineers are now producing more code than ever, but the system itself isn’t prepared for that level of throughput.

And the issues only compound: PRs pile up in review, context switching increases, more coordination is needed. All that translates to more things sitting, waiting to go through an already-strained pipeline.

Imagine having ten open PRs waiting for review while you're being asked to make changes to two others. On top of that, you need to resolve conflicts with the main branch. You're not writing more code — you're coordinating and continuously debugging in an exhausting, never-ending process.
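One way to quantify the pile-up is Little's Law: average cycle time equals work in progress divided by throughput. The PR counts below are made-up numbers for illustration, but the relationship holds for any stable queue.

```python
# Little's Law: average cycle time = WIP / throughput.
# Illustrative numbers only; not measurements from any real team.
def avg_cycle_time_days(wip_prs, throughput_prs_per_day):
    """Average time a PR spends in flight, assuming a stable queue."""
    return wip_prs / throughput_prs_per_day

# Before AI: 10 PRs in flight, reviewers merge 5 per day.
print(avg_cycle_time_days(10, 5))  # 2.0 days per PR

# AI doubles code output, but review throughput is unchanged:
print(avg_cycle_time_days(20, 5))  # 4.0 days per PR
```

Doubling the code produced without raising review throughput doubles how long each PR waits — the system got slower, not faster.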

What the Fastest Teams Have in Common

When we look at the teams that are actually improving their cycle time significantly, the pattern is clear. These are not teams that simply use more AI; they're teams that already had a well-oiled engineering system. They had already implemented fast pipelines, small PRs, quick code review cycles, constant deploys, and low WIP.

In that kind of environment, AI becomes a tremendous multiplier. If writing code was a meaningful part of the cycle, then speeding that up condenses the whole cycle. Throughput increases and the team can iterate much faster with real users.

But when the overarching system is slow, AI doesn't accelerate anything. It just produces more code that ends up waiting somewhere.

Writing Code Was Never the Issue

AI is uncovering something that took us years to see: the main issue in software development was never how quickly we could write code. The real problem lay elsewhere: coordination, decisions, slow pipelines, and clunky processes preserved from earlier eras.

AI doesn't change that. All it does is radically optimize one part of the system that, for many teams, was never the bottleneck.

And that's creating a very clear divergence. Teams that already had optimized engineering systems are becoming dramatically faster. Teams that didn't are simply producing more code but not generating more value because their system wasn’t built to weather the impact.

That's the true story of this moment: AI didn't make software development faster. It only shed light on how inefficient our existing systems always were, especially systems that effectively consisted of a single person, which we'll explore next.

Everything you need to unlock engineering excellence