Modernizing high-risk systems in the age of AI.
Already Inside
The target is not your code. It is the layer of trust your code runs on.
Founder, Def Method
In my last article, I argued that detection is losing the race. AI-generated code is entering production faster than review can keep up, and the right response is architecture that limits how far a failure can travel.
The Vercel breach, disclosed April 19, confirms that argument in a way I didn't expect. I expected a code failure. This wasn't one.
The breach had nothing to do with AI-generated bugs or missed reviews. It originated in a third-party OAuth application and moved entirely through legitimate access. And yet the containment framework predicts it exactly. Organizations aren't just writing code faster; they're authorizing and scaling applications faster than they're governing them.
The breach began in June 2024 with a compromised OAuth application belonging to a vendor. From there, the attacker moved through a chain of authorized access: vendor to Google Workspace, Workspace to internal systems, and from there into customer environment variables containing credentials for downstream systems. No exploit was required. The attacker moved through doors that were already open.
The System Failed at Containment
In a well-architected system, failures are localized, observable, and recoverable. This failure was none of those.
Localization failed because the initial compromise did not stay contained to the vendor. It propagated across multiple trust boundaries, each one justified in isolation. The system had no concept of limiting traversal across those boundaries. The trust graph was effectively flat. There was no depth. No segmentation. No boundary that forced the failure to stop.
Observability failed because the activity looked legitimate. A compromised OAuth token looks identical to a legitimate one. A compromised application is not an intruder in the traditional sense. Logs tuned to detect abnormal behavior are ineffective because this is normal behavior being misused.
Recovery failed in a more subtle way. Rotating credentials did not invalidate their use in existing deployments. Compromised secrets remained active in prior artifacts. Rotation created the appearance of recovery without actually restoring control.
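That gap between rotation and recovery is checkable. The sketch below is a minimal, hypothetical audit — the function names, data shapes, and the idea of fingerprinting secrets are my own illustration, not any platform's API — that flags live deployments still carrying a credential that was supposedly rotated:

```python
import hashlib

def fingerprint(secret: str) -> str:
    """Non-reversible fingerprint so the audit never handles raw secrets."""
    return hashlib.sha256(secret.encode()).hexdigest()[:12]

def stale_secret_report(rotated_secrets, deployments):
    """Flag deployments whose baked-in env vars still match a rotated value.

    rotated_secrets: {name: old_value} of credentials rotated in the vault.
    deployments: list of {"id": ..., "env": {name: value}} live deployments.
    Returns (deployment_id, credential_name) pairs where the old value
    is still active in a running artifact.
    """
    rotated = {name: fingerprint(val) for name, val in rotated_secrets.items()}
    findings = []
    for dep in deployments:
        for name, value in dep["env"].items():
            if rotated.get(name) == fingerprint(value):
                findings.append((dep["id"], name))
    return findings
```

Under this framing, rotation is complete only when the report is empty — otherwise you have the appearance of recovery the article describes.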
These are not implementation mistakes. They are architectural ones.
The Layer Below the Code
My April piece focused on the code layer, where AI increases the volume of mistakes beyond what review can realistically catch. This breach operates one layer below that, in the systems that code depends on. OAuth authorizations, deployment platforms, identity providers. These form a network of delegated trust that most organizations do not model explicitly.
That network is the system. And like AI-generated code, it is growing faster than it is being reviewed.
Every OAuth app, every SaaS integration, every developer tool connected to your identity provider becomes a node. Most are legitimate. Most are low risk. Almost none are revisited. A compromised OAuth application is not a broken component. It is a valid component of your system operating with the permissions it was given.
The Trust Graph Is the System
What the Vercel breach makes clear is that containment has to extend beyond code boundaries. It has to include the trust graph.
Most organizations implicitly treat this graph as flat. Once something is authorized, it can move laterally across systems with few meaningful constraints. Access is granted in isolation, but exercised in combination. That is the architectural failure.
A flat trust graph allows traversal. A segmented one forces containment. The question is no longer just "If a bug slips through, how far can it travel?" It is also: "If something that already has access is compromised, where does it stop?" Most systems cannot answer that question, because they were never designed to.
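The difference between a flat and a segmented trust graph can be made concrete. This is a toy sketch, not a model of Vercel's actual topology — the node names are illustrative — computing blast radius as simple reachability from a compromised node:

```python
from collections import deque

def blast_radius(graph, start):
    """Everything reachable from a compromised node via authorized edges (BFS)."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen - {start}

# Flat graph: every authorization grants onward reach.
flat = {
    "vendor-oauth-app": ["google-workspace"],
    "google-workspace": ["internal-admin"],
    "internal-admin": ["deploy-platform"],
    "deploy-platform": ["customer-db", "payment-api"],
}

# Segmented graph: a boundary stops delegation at the workspace layer.
segmented = {
    "vendor-oauth-app": ["google-workspace"],
    "google-workspace": [],  # boundary: no onward delegation
    "internal-admin": ["deploy-platform"],
    "deploy-platform": ["customer-db", "payment-api"],
}
```

In the flat graph, compromising the vendor app reaches all five downstream nodes; in the segmented one, it reaches exactly one. The code is trivial — the hard part is that most organizations have never written the graph down.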
Blast Radius Is a Design Decision
Deployment platforms make this visible. They act as centralized stores of credentials. A single project may contain dozens of environment variables. At organizational scale, that becomes hundreds or thousands of credentials, each one providing access to a different downstream system.
This is credential fan-out. Each credential extends the trust graph. Each one creates a new path. Containment asks a simple question of each path: If this credential is compromised, where does it stop? Does it expose a narrowly scoped resource or a broadly privileged system? Does it operate within constrained permissions or administrative access? Are its actions observable, or indistinguishable from normal traffic? These decisions define the blast radius. They are architectural, whether or not they are treated that way.
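Those three questions can be turned into a checklist that runs over a credential inventory. The sketch below is hypothetical — the attributes and thresholds are assumptions for illustration, not a standard taxonomy — but it shows how the containment questions become an auditable property of each credential rather than a one-time design conversation:

```python
from dataclasses import dataclass

@dataclass
class Credential:
    name: str
    target: str       # downstream system this credential reaches
    scope: str        # "narrow" or "broad"
    privilege: str    # "constrained" or "admin"
    observable: bool  # can its use be distinguished from normal traffic?

def blast_radius_flags(cred: Credential) -> list:
    """Return the containment questions this credential fails."""
    flags = []
    if cred.scope == "broad":
        flags.append("broadly scoped resource")
    if cred.privilege == "admin":
        flags.append("administrative access")
    if not cred.observable:
        flags.append("actions not observable")
    return flags
```

Run over hundreds of environment variables, a report like this makes the fan-out visible: each flagged credential is a path where a compromise does not stop.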
What This Means for Engineering Leadership
Most teams treat this as an inventory problem. What has access to our systems? OAuth applications, integrations, platform connections. Which are still in use? Which have broad permissions? Which have not been reviewed? You need the list, but inventory is only the starting point.
Containment is a design problem. If one of those entities is compromised, what can it reach? Where are the boundaries? What actually stops it from moving further? Most organizations can produce a list of what has access. Far fewer can describe the blast radius of that access. That gap is where risk accumulates, in the form of ungoverned credentials and unexamined traversal paths.
Recent supply chain attacks have targeted the same layer from different angles: CI/CD systems, package maintainers, OAuth relationships. Each aimed at the same asset: the credentials that connect systems together.
The target is not your code. It is the layer of trust your code runs on. The organizations that handle the next Vercel well will be the ones that treated their trust graph as architecture, and designed its blast radius accordingly.
Ready to modernize your Rails system?
We help teams modernize high-stakes Rails applications without disrupting their business.
If this was useful, you might enjoy Essential Complexity — a bi-weekly letter on modernizing high-risk systems in the age of AI.