The Next Golden Path Is for Agents

Platforms have always served two kinds of users. The organization standardizes most teams onto a managed path and trusts a few to work directly with the primitives. Agents are a third user that fits neither path, and the next golden path has to be designed for them.

I recently saw an Alberta company selling new diesel tractors without emissions electronics or software-locked diagnostics, for about half the price of a similar John Deere. The appeal is simple. A farmer who lives forty miles from the nearest dealer wants a machine they can fix themselves, not one that needs remote diagnostics.

The point is segmentation. John Deere may be protecting a lucrative service lifecycle, but the product direction still reveals a real user split. Some customers want complexity handled for them. Others need enough of the machinery exposed that they can repair it when something breaks. Both are valid depending on the use case.

Enterprise platforms have the same split, and AI is about to make it harder to ignore. Platforms have always segmented users by responsibility: teams that should not own the underlying mechanisms, and teams trusted to work directly with the primitives. Now, a third kind of user has appeared that fits neither path.

That user is the agent. The next platform user is software executing a task on someone's behalf, with no defined responsibility model behind it. Agents require a path designed for scoped context, temporary authority, typed state, and evidence captured while the work happens. Most platform organizations have not drawn that line yet.

How platforms accumulate layers

Platforms usually get more complex over time as they try to handle more use cases. Post-incident reviews, for example, often end in new controls or guardrails. Over time, the platform absorbs problems the organization has not addressed elsewhere, often because those problems are too difficult to fix at the source.

Each decision may make the platform safer in isolation, but it also adds overhead, and the accumulated safety layers grow fragile. That trade-off only works if teams want the platform to absorb more of the work. Some do, but high-performing teams often prefer a simpler platform that lets them debug issues independently and open support tickets only when necessary.

Even before agents, the result was a platform that treated very different teams as the same user, because the organization wanted a single "golden path" that standardized as much as it could across teams.

Some teams rely on the platform because owning the underlying mechanisms is not their job. For them, tuning those mechanisms slows delivery and can increase security risk, so the platform needs to provide telemetry, recovery, security, evidence, and safe defaults. Other teams work closer to the performance edge. They need to remove generalized layers and tune primitives for systems with very specific needs, where extra abstraction gets in the way.

The portal looks done, delivery still hurts

Portals can create a false sense of confidence by measuring whether teams use the managed path, rather than whether the path actually relieves teams of their workload.

Last quarter, I visited an insurance company where the platform team had rolled out a new internal portal and shared impressive adoption numbers. When I talked to the release engineers on three of their critical systems, I heard a different story. They were still manually editing generated CI pipelines because the templates didn't support their test setup, and change management tickets were still being filled out by hand and pushed to ServiceNow because the automated evidence bundle missed the audit fields their regulator cared about. The portal existed, but the useful parts of the platform could not be recombined in a way that matched how those systems actually shipped, so the work stayed with the teams.

One test I like is simple: when the release gets weird, who owns the work? If the app team still has to stitch the exception together by hand, the platform has not really absorbed the work.

The alternative approach has its own risks. Expert teams may work directly with primitives and initially move faster, but speed alone does not ensure effective governance. I have observed teams managing their own pipelines and observability stacks until an incident or audit required a comprehensive record of changes, approvals, and evidence. While the data existed, it was dispersed across tools and could not be reviewed end-to-end.

Agents hit the same gap with less to fall back on. Operating a system is not the same as governing it. Human experts have always covered that gap with judgment and institutional memory, especially when the platform did not capture the full story. When the work moves to an agent, that fallback does not exist, and the gap becomes a serious risk.

Usually, the analysis stops at these two examples and lands on a familiar conclusion: segment teams by operational maturity, and accept that some teams will own more of the platform locally than leaders would prefer.

I now think that is the wrong axis. The important question is whether the work is being done by a human team or by agents acting for a specific task.

The platform's newest user

Debates about golden paths versus custom pipelines in enterprise software are fundamentally about which of these two user groups the platform should serve. This discussion assumes these are the only groups worth designing for.

That assumption is starting to change. Some of the work now lands with agents, and they need a path of their own.

Stepping through the three platform user models shows what each path has to expose.

Agents do not align well with the managed path, which is designed to abstract away the underlying mechanisms. Agents require these mechanisms to be exposed as explicit, machine-readable contracts, such as declarative policies and typed interfaces that humans previously accepted as compliance requirements. Without these contracts, agents are forced to improvise across context and authority boundaries the platform never made explicit.
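What "machine-readable contract" means here can be sketched concretely. The names and fields below are hypothetical, not from any real platform: the point is that a rule an agent must follow is expressed as typed, evaluable data rather than prose a human interprets.

```python
from dataclasses import dataclass

# Hypothetical sketch: a compliance rule as a typed, declarative contract
# an agent can evaluate directly, instead of prose it has to interpret.
@dataclass(frozen=True)
class DeploymentPolicy:
    allowed_environments: frozenset[str]
    requires_open_change_ticket: bool
    max_artifact_age_days: int

    def permits(self, environment: str, has_ticket: bool, artifact_age_days: int) -> bool:
        return (
            environment in self.allowed_environments
            and (has_ticket or not self.requires_open_change_ticket)
            and artifact_age_days <= self.max_artifact_age_days
        )

policy = DeploymentPolicy(
    allowed_environments=frozenset({"staging", "prod"}),
    requires_open_change_ticket=True,
    max_artifact_age_days=30,
)

print(policy.permits("prod", has_ticket=True, artifact_age_days=3))   # True
print(policy.permits("prod", has_ticket=False, artifact_age_days=3))  # False
```

A human reviewer can accept an ambiguous rule and apply judgment; an agent given only the prose version is improvising across exactly the boundary the contract is supposed to make explicit.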

Agents also differ from primitives-path users. The primitives path assumes a senior engineer can bring judgment and accountability to the decision. An agent may reason over the inputs, but it cannot own the responsibility model the path quietly depends on. Allowing agents direct access to primitive infrastructure with minimal compliance leads to the same failures as before, only more rapidly. Standing credentials hand the agent excessive context, and there is no record of what the agent actually did with it.

This connects to my recent work. "Auth was built to limit context" looked at the mismatch between agents and the authorization model; "Ungoverned context supply chain risk" explored how agents don't fit existing provenance models. The same pattern shows up in platform user design: agents don't fit the operator model either.

What agents need from the platform

Once agents become platform users, the old segmentation breaks.

An agent-native golden path is a path designed around the contracts an agent actually needs. The contracts that matter most have to be built into the path rather than bolted on later.

Scoped context through a gateway. Agents should not receive direct OAuth grants to services like Drive, Jira, GitHub, or identity providers. Instead, they receive access to a gateway that holds credentials and enforces workflow-specific policies. The gateway also manages classification and filtering, providing agents with a brokered view of the environment rather than full access.
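A minimal sketch of the brokered-view idea, with hypothetical names throughout: the gateway holds the credentials and a per-workflow allowlist, and the agent only ever calls the gateway.

```python
# Hypothetical gateway sketch: the agent never holds service credentials.
# It asks the gateway for data, and the gateway checks the workflow's
# policy before touching the service on the agent's behalf.
class ContextGateway:
    def __init__(self, credentials: dict[str, str], policies: dict[str, set[str]]):
        self._credentials = credentials  # held by the gateway, never the agent
        self._policies = policies        # workflow -> services it may read

    def fetch(self, workflow: str, service: str, query: str) -> dict:
        allowed = self._policies.get(workflow, set())
        if service not in allowed:
            raise PermissionError(f"workflow {workflow!r} may not read {service!r}")
        # A real gateway would call the service with its held credential and
        # classify/filter the response; here we return a stub record.
        return {"service": service, "query": query, "filtered": True}

gateway = ContextGateway(
    credentials={"jira": "example-token"},   # illustrative only
    policies={"triage-bot": {"jira"}},
)

print(gateway.fetch("triage-bot", "jira", "open incidents"))
# fetch("triage-bot", "github", ...) raises PermissionError: not in policy
```

The design choice worth noting is that the policy check and the credential live in the same place, so there is no code path where the agent sees a raw token.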

Task-bound credentials. The OAuth grants AI tools hold today are almost always standing, while the task they support is ephemeral. That is backward; a task-bound credential is created when the agent picks up a ticket and expires when the ticket closes, with scope limited to what that ticket actually needs. The blast radius of a compromised agent shrinks to tasks open at that moment.

Typed event streams. Agents struggle to interpret unstructured logs because they must compress and summarize the information, which introduces errors. Event streams with defined schemas provide structured facts, such as deployment times and open incidents by component. The less an agent has to infer from messy logs, the fewer places it can invent the wrong story.
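The difference between inferring from logs and reading typed events is easy to show. The schema below is a made-up example, but the shape is the point: the agent answers a question with a filter over structured facts, not a summarization pass over log text.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical event schema: structured facts with defined fields,
# in place of free-text log lines the agent would have to parse.
@dataclass(frozen=True)
class DeploymentEvent:
    component: str
    version: str
    deployed_at: datetime
    open_incidents: int

events = [
    DeploymentEvent("payments", "2.4.1", datetime(2025, 3, 1, tzinfo=timezone.utc), 0),
    DeploymentEvent("payments", "2.5.0", datetime(2025, 3, 8, tzinfo=timezone.utc), 2),
]

# "When did payments last deploy cleanly?" becomes a filter, not an inference.
clean = [e for e in events if e.open_incidents == 0]
latest_clean = max(clean, key=lambda e: e.deployed_at)
print(latest_clean.version)  # 2.4.1
```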

A live state layer. Most AI tools reference a static corpus, reading historical information rather than the system's current state. A live state layer addresses this by providing a queryable view of the system's present condition, including open incidents and active policy revisions. Agents can query this layer directly to determine if it is safe to merge or deploy.
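A state layer can be sketched as a queryable object rather than a document corpus. The class and method names are hypothetical; the contract is that "is it safe to deploy?" is answered from current state, not from whatever was true when a knowledge base was last indexed.

```python
# Hypothetical sketch: a queryable view of the system's present condition.
class StateLayer:
    def __init__(self):
        self._open_incidents: dict[str, int] = {}
        self._frozen_components: set[str] = set()

    def record_incident(self, component: str) -> None:
        self._open_incidents[component] = self._open_incidents.get(component, 0) + 1

    def resolve_incident(self, component: str) -> None:
        count = self._open_incidents.get(component, 0)
        if count:
            self._open_incidents[component] = count - 1

    def freeze(self, component: str) -> None:
        self._frozen_components.add(component)

    def safe_to_deploy(self, component: str) -> bool:
        # Live answer: no open incidents and no active change freeze.
        return (
            component not in self._frozen_components
            and self._open_incidents.get(component, 0) == 0
        )

state = StateLayer()
print(state.safe_to_deploy("checkout"))  # True: nothing open
state.record_incident("checkout")
print(state.safe_to_deploy("checkout"))  # False: an incident is live
```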

Evidence capture by construction. Each grant issued by the gateway and every action performed by the agent should generate evidence as part of the workflow. The audit trail should be produced automatically during operations, not reconstructed after the fact.
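"By construction" can be made concrete: the evidence record is emitted as a side effect of performing the action, inside the same call, so there is no separate reconstruction step. The log structure below is a minimal illustration, not a real audit format.

```python
import time

# Hypothetical sketch: every grant and action writes an evidence record
# as part of the call itself, so the trail is produced, not rebuilt.
class EvidenceLog:
    def __init__(self):
        self.records: list[dict] = []

    def record(self, kind: str, **details) -> None:
        self.records.append({"kind": kind, "at": time.time(), **details})

def run_action(evidence: EvidenceLog, agent_id: str, action: str, target: str) -> None:
    evidence.record("grant", agent=agent_id, action=action, target=target)
    # ... the actual work would happen here ...
    evidence.record("result", agent=agent_id, action=action, target=target, ok=True)

evidence = EvidenceLog()
run_action(evidence, "triage-bot", "comment", "OPS-1423")
print(evidence.records[0]["kind"])  # grant
```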

The goal is not to add another compliance layer for agents. The same controls that limit agent access also ensure work is governable: scoped grants explain access, typed events document changes, the state layer records conditions at the time, and evidence capture creates a trustworthy record.

The primitives path for human experts remains, but its boundaries are now clearer. The line between the standardized path and fully custom primitives work used to be a weak point; it is easier to hold when the platform captures state, authority, and evidence as part of routine operations, because evidence records are generated directly from the work, in a format compliance officers can read.

This doesn't mean the managed path disappears for teams that want everything handled for them, but it does change how things are organized. The managed path serves people who want to outsource the underlying work. The agent-native path serves tasks that agents execute on behalf of people, and the primitives path is reserved for the small number of teams working close enough to the system to handle the standard processes themselves. The segmentation has moved onto a different axis: who is actually doing the work.

Where the next layer goes

Every layer in a mature platform has a reason for being there, and the stories above show that each one was earned. The real mistake is thinking the next set of layers belongs in the same place as the last. The teams a platform organization wants to keep will do more of their work through agents, and those agents need contracts the current managed path is not built to offer. The question worth spending time on this year is what the golden path should look like when most of the work on it is not typed at a keyboard.

AI investments face the same challenge. An AI strategy cannot bypass the platform organization, as any gaps in the platform will persist and hinder agents. Pilots may succeed, but broader rollouts stall for the same reasons platform adoption does. When the platform does not absorb the work, teams still carry it, even if the dashboard says adoption is high.

The Alberta company explicitly defined its segmentation and built a product to match. Platform organizations must do the same for agents. If the managed path remains the default for all new work, AI rollouts will inherit existing gaps. The next golden path must be designed for software executing tasks on behalf of people and teams, with context, authority, state, and evidence integrated from the beginning.