
The Brittle Internet

March 26, 2026

We are entering a new era of the internet. Propelled by AI, it is an era of unprecedented speed and productivity. What once took entire development teams weeks to scope, build, test, and deploy can now be done by a competent solo developer (and a swarm of agents) in a fraction of the time. I've already written about model capabilities, how they continue to improve, and how they enable velocity in ways that we've never seen before.

But the other side of the coin is that we are producing a more brittle internet. What does this mean? The internet—a term I'm using to more broadly capture all things related to software (websites, applications, and the like)—just doesn't work like it used to. Recent examples abound, including:

  • GitHub's recent, never-ending downtime saga [1, 2]
  • Claude's (less severe yet still noticeable) downtime [1]
  • Cloudflare's two instances of global downtime in as many weeks [1, 2]
  • Supabase's consistent downtime as best evidenced by people complaining on Reddit [1, 2, 3]

I could search for more examples, but I think any user of developer tools—or of the internet generally—has noticed a similar trend, wherein things are simply more brittle than they used to be.


This is a new phenomenon. In hindsight, we will look back on the years shortly before the AI explosion as the years of the stable internet. ("Back in my day, the internet worked.") Between 2015 and 2023, software development skyrocketed as a career path, and alongside it, best practices around code conventions, security, and maintainability matured.

Now, in 2026, I have a baseline expectation that the services I use will not work 100% of the time, or 100% as I expect. To be very clear, this is a new expectation that has only developed in the last year or so. This could be a coincidence, but I am nearly certain that it is inherently linked to speedier development, trusting AI agents to author more code, and less ownership/visibility into the final code output. Many developer teams, including some who work on the teams building the brittle software I mention above, boast about their token usage, some going so far as to brag about the fact that humans write no code on their team.

Beyond those developer teams, it is also true that developers are more generally using AI to author more and more code. Here is how Claude Code usage—as measured by its inclusion in commits on GitHub—has changed over the last year.

[Chart: Claude Code activity on GitHub, March 25, 2025 to March 25, 2026]
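This kind of measurement works because Claude Code, by default, appends a "Co-Authored-By: Claude" trailer to the commits it authors (teams can disable this, so it undercounts). A minimal sketch of counting such commits in a repository—here run against a throwaway demo repo so it is self-contained; in practice you would run the `git log` line in your own repo:

```shell
# Demo: count commits carrying Claude Code's co-author trailer.
# Claude Code appends "Co-Authored-By: Claude <noreply@anthropic.com>"
# by default, so the trailer is a rough proxy for agent-authored commits.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
# One plain commit, one with the trailer Claude Code would add.
git -c user.name=a -c user.email=a@b commit -q --allow-empty -m 'human commit'
git -c user.name=a -c user.email=a@b commit -q --allow-empty \
  -m 'agent commit' -m 'Co-Authored-By: Claude <noreply@anthropic.com>'
# Print only the Co-Authored-By trailer values, count lines naming Claude.
count=$(git log --format='%(trailers:key=Co-Authored-By,valueonly)' \
  | grep -ci claude)
echo "claude-co-authored commits: $count"
```

Note that this only sees local history; the chart above is presumably aggregated across public GitHub commits, which is a much broader (and noisier) measurement.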

What are the implications of this shift? For one, we will work faster—this is already known. And we will also break more. This might be acceptable in smaller applications, internal tools, prototypes, and experiments. But it is frustrating when the developer tools and platforms we rely on most—GitHub, Claude Code (heh), Cloudflare—face critical reliability issues. Is moving fast worth breaking things in those orgs? I don't think so.

To be fair, I want to call out another compounding factor. AI-assisted engineering has led to an explosion of growth in code committed, services deployed, and applications used, which means the platforms listed above have likely faced unprecedented strain and uncovered new edge cases simply from the influx of users and services they now support. That may explain some of the brittleness. Regardless, my initial observation—about ownership of the code that is written, and about how many developers are deferring all critical thinking regarding their code to an agent—still stands. Some level of understanding of one's codebase is a prerequisite for building reliable software, and many developers are skipping that understanding entirely in order to move faster.