Engineering Leadership  ·  Executive Briefing

We've been
here before.

The cloud transition cost enterprises billions in wasted spend and a decade of false starts before the industry converged on the model that worked. The same inflection point is happening now with AI-native development. The difference is: this time, we know what the right model looks like — and the window to adopt it is measured in months, not years.

The Cloud Migration

It took a decade to learn
what not to do.

When cloud computing arrived, every enterprise faced the same question: do we retrain our existing infrastructure teams, or do we build something new? Most chose retraining. Most failed. The ones that succeeded built a Cloud Centre of Excellence (CCoE) — a dedicated, structurally independent team that established the patterns before the wider organisation adopted them.

2006 – 2010  ·  Denial
“It's just someone else's data centre.”
AWS launched EC2 in 2006. Most enterprise IT leadership dismissed it as unsuitable for serious workloads. Traditional infrastructure teams — the engineers who physically racked servers and managed data centres — were told to “look into it.” No dedicated budget. No dedicated headcount. No structural change. The prevailing assumption: our existing people can figure this out.
2010 – 2014  ·  Lift-and-Shift Damage
The most expensive way to learn nothing.
Enterprises began migrating — by taking their existing on-premises architectures and moving them directly into cloud VMs. Traditional infrastructure engineers built the cloud exactly the way they built data centres: monolithic servers, manual provisioning, no auto-scaling, no infrastructure-as-code. The result was predictable and devastating.
70%
of early cloud migrations exceeded their budgets. Companies spent more on cloud than on-premises because they weren't using it natively.
5–10yr
before the industry converged on the CCoE model. Many companies repatriated workloads back to on-prem, concluding “cloud is too expensive” — when the real problem was they didn't know how to use it.
2014 – 2018  ·  The CCoE Emerges
A dedicated team. Not a retrained one.
AWS, Google Cloud, and Microsoft began publishing Cloud Adoption Frameworks that explicitly recommended a CCoE — a dedicated, multi-disciplinary team of top talent that operated outside traditional IT. Their job: build the “Golden Path” — secure, scalable patterns for using the cloud correctly. Once those patterns were established, they were handed to the wider organisation to adopt. Companies like Capital One, Netflix, and GE became the reference cases for doing it right.
2018 – 2022  ·  Mainstream Acceptance
The evidence was overwhelming.
Companies that built CCoEs migrated faster, spent less, and had fewer security incidents. The ones that didn't had accumulated years of “cloud debt” — poorly architected services, no governance, spiralling costs, and security vulnerabilities. By 2020, the CCoE was the default recommendation from every major consultancy and cloud provider. The debate was over.

The damage that preceded acceptance

Cost

Cloud bills 2–3× budget because nobody optimised for cloud-native patterns. Always-on VMs instead of auto-scaling. No FinOps discipline. No right-sizing.

Security

Traditional engineers didn't understand IAM, shared responsibility models, or cloud-native security postures. Public S3 buckets. Exposed credentials. Data breaches that forced executive attention.

Delivery

12–18 month migration projects that delivered nothing usable. The team didn't have the skills. The architecture was wrong. The timeline was a fiction.

Talent

The best engineers left for companies that were doing cloud properly. The company was left with the people least equipped to fix the mess — and a reputation that made hiring harder.

The universal mistake was the same every time: assuming that existing teams, with existing skills and existing processes, could adopt a fundamentally new paradigm by simply being told to do so. It did not work for cloud. It will not work for AI.

The AI Inflection

The same pattern.
Compressed.

The cloud transition and the AI transition share an identical structural shape. The technology is different. The organisational mistake is the same. The only difference is speed — what took cloud a decade is happening with AI in months.

Cloud Transition
AI Transition
The old team
Infrastructure engineers who built physical servers and managed data centres
The old team
Traditional software engineers who write deterministic code in structured frameworks
The new paradigm
Cloud computing — infrastructure as code, auto-scaling, serverless, API-first architecture
The new paradigm
Agentic development — AI-directed workflows, parallel agent streams, conversation as the development record
The instinct
“Our server engineers can learn cloud. Just give them training.”
The instinct
“Our engineering team can learn AI development. Just give them the tools.”
What actually happened
Lift-and-shift. Cloud used as an expensive data centre. Costs ballooned. Migrations failed. 5–10 years lost.
What will happen
AI treated as “just another API.” Existing processes wrapped around new tools. The structural advantages are missed entirely.
What worked
A dedicated Cloud Centre of Excellence (CCoE) that built the golden path first, then transitioned the wider org.
What will work
An Agentic Centre of Excellence (ACoE) that proves the model, builds the reference implementation, and creates the playbook the wider engineering team adopts.

The risk is not that the engineering team cannot learn AI development. They can — given time, reference implementations, and proven patterns to follow. The risk is that without those things, they will build AI the way they build traditional software: treating agents as API endpoints, wrapping existing processes around new tools, and missing the structural advantages that make the paradigm worth adopting.

The compression factor

Cloud infrastructure evolved on annual release cycles. Enterprises had years to recover from early mistakes. AI models and tooling evolve on weekly cycles. A new model release can obsolete an architectural decision overnight. The competitive window for organisations that move first is narrower than it was with cloud — and the cost of moving slowly is not just inefficiency. It is irrelevance.

The Evidence

The current model
has a ceiling.

Before asking what needs to change, it is worth understanding what the current structure actually produces. The data is unambiguous: the vast majority of an engineering team's time is consumed by structure, not by building.

52
min / day
Median active coding time per developer
Measured across 250,000+ developers. An 8-hour day produces roughly 11% active coding time. The remaining 89% is meetings, coordination, context-switching, and ticket overhead — not laziness, but structure.
coordination · ceremonies · context-switching · ticket authoring
~11% productive coding  ·  ~89% structural overhead

The agentic model changes the ratio

Traditional developer
1 stream · ~11% of the day productive · no parallel execution
0.11×
52 minutes of productive coding per 8-hour day
Agentic developer
3 parallel streams · ~75% per stream productive · 25% oversight
2.25×
3 streams × 75% = 225% effective productive output per day
Traditional baseline → Agentic target
~20×
productive throughput per developer — achieved by structure, not effort

The 52-minute median is not a criticism of developers — it is a description of the structure they work inside. The overhead is not waste. In a traditional team it is structurally necessary: coordination, alignment, context transfer between people. Remove those structural conditions — through domain ownership, parallel agent streams, and conversation as the record — and the overhead disappears with them. The number is not achieved by asking developers to work harder. It is achieved by changing the structure they work within.
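The throughput arithmetic above is simple enough to reproduce directly. A minimal sketch (the stream count and per-stream utilisation are the briefing's stated assumptions, not measured constants):

```python
# Throughput model behind the 0.11x / 2.25x / ~20x figures.
WORKDAY_MIN = 8 * 60  # an 8-hour working day, in minutes

# Traditional baseline: 52 minutes of active coding per day
# (the median from the 250,000+ developer measurement cited above).
traditional = 52 / WORKDAY_MIN              # ~0.11 of a day

# Agentic model (assumed): 3 parallel agent streams, each ~75%
# productive, with the remaining attention spent on oversight.
streams, per_stream_utilisation = 3, 0.75
agentic = streams * per_stream_utilisation  # 2.25 effective days of output

multiplier = agentic / traditional          # ~20.8, quoted as ~20x
print(f"{traditional:.2f}x -> {agentic:.2f}x (~{multiplier:.1f}x gain)")
```

The multiplier is a structural ratio, not a performance target: it falls directly out of replacing one mostly-idle stream with several mostly-active ones.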

The Problem

Adding people is not
the same as adding capacity.

The instinct when velocity is low is to hire more engineers. But every new person adds communication channels — one to each existing team member, so each hire to a team of N adds N new connections. At a certain team size, the majority of effort is spent on synchronisation, not on building.

Communication channels = N(N−1) ÷ 2
3 engineers → 3 channels  ·  6 engineers → 15 channels  ·  12 engineers → 66 channels
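The formula is the standard pairwise count: one potential coordination channel per unordered pair of people. A quick sketch reproduces the figures:

```python
def channels(n: int) -> int:
    """Coordination channels in a team of n people: one per
    unordered pair, i.e. n(n - 1) / 2."""
    return n * (n - 1) // 2

for n in (3, 6, 12):
    print(f"{n} engineers -> {channels(n)} channels")
# 3 -> 3, 6 -> 15, 12 -> 66. Each new hire to a team of n adds
# n new channels, so coordination overhead grows quadratically
# while delivery capacity grows only linearly.
```

The quadratic growth is the structural point: headcount scales linearly, synchronisation cost does not.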

Why ceremonies exist — and what replaces them

Every ceremony in a traditional engineering team exists to manage communication between people with overlapping context. Remove the overlap, and the ceremony loses its reason to exist.

Ceremony · Exists because · In the agentic model
Daily standup · No shared context — each developer holds a different mental model of the system state · Domain ownership eliminates overlap. Context lives in the conversation log, not in someone's head.
Sprint planning · Work must be allocated manually across people with overlapping capabilities · Domain ownership determines routing. The developer self-selects from their domain.
Refinement · Complexity is unclear because no one has explored the implementation · Parallel probing surfaces complexity directly. The proof is in the conversation.
Retrospective · Process friction is invisible until explicitly surfaced in a meeting · Friction is visible in LOC trends, conversation logs, and domain boundary pressure — directly observable.
Ticket authoring · Requirements must be translated into structured artefacts before work begins · The conversation context is the ticket. It already contains the requirement, reasoning, and constraints.

This is not an argument against process. It is an argument that the ceremonies of traditional software engineering were designed to solve a specific structural problem — coordinating many people across many communication channels. Reduce the channels through domain ownership and conversation-as-record, and those ceremonies become overhead without a purpose. The functions they performed still exist. They are performed better, and automatically, by the structure itself.

Same 5 streams — two topologies
[Diagram: Traditional team — developers D1–D5, 10 coordination channels. Agentic developer — one developer (DEV) running agent streams A1–A5, 0 coordination channels.]

Left: every developer is a channel for every other — coordination is permanent overhead. Right: one developer, five agent streams — zero inter-stream coordination.

The Model

The Agentic Centre
of Excellence.

Just as the CCoE was the organisational model that made cloud adoption work, the ACoE is the model that makes the transition to agentic development work. A dedicated, structurally independent team that builds the golden path — the proven patterns, the reference implementations, the operational playbook — before the wider engineering organisation adopts it.

Build

Prove the model

Build the first product using agentic development. Create the reference implementation that demonstrates the model works — not in theory, but in shipped, production-quality software.

Codify

Establish the golden path

Document the patterns, the domain ownership model, the conversation discipline, the testing standards. Create the playbook the wider team will follow — not from theory, but from what actually worked.

Transfer

Bring teams across

Developers enter the ACoE structure and adopt its norms. They learn by doing — working on real problems, inside a working model — not by attending a training course.

Why it must be structurally independent

The instinct is to incubate the agentic model inside the existing engineering team — a small group that proves the concept before wider adoption. It is the safest-sounding approach. It is also the one most likely to fail. Not because the people are wrong, but because the structure is.

Existing teams are containers. They have their own processes, measurement frameworks, reporting lines, and cultural norms — all of which were built for the traditional model. A new team inside that container does not escape the gravity of those structures. It just experiences the pull more slowly — until a deadline, a resourcing decision, or a reporting conversation bends it back into the shape the container expects.

New team inside old org: Sprint ceremonies pull the new team back in. Work must be represented as tickets to fit the upstream reporting structure. The agentic model becomes a delivery method for Jira tickets.
Independent ACoE: The ACoE sets the norms. Incoming developers enter an environment where the discipline is already established and the old habits have nowhere to take root.

New team inside old org: The existing org borrows people from the new team when deadlines slip elsewhere. Domain ownership fragments. The parallel model collapses back into coordination overhead.
Independent ACoE: New developers are onboarded into the conversation record. They learn by reading existing sessions, picking up a bounded domain, and producing output under the existing discipline.

New team inside old org: The new team is measured on the same metrics as the old team. Velocity that looks different, ceremonies that are visibly absent — all of this attracts pressure to conform.
Independent ACoE: The ACoE defines its own measurement baseline. Productive LOC, delivery pace, and production quality are established as the standard from day one.

New team inside old org: Senior traditional developers mentor the new team in old practices. Expertise flows in the wrong direction.
Independent ACoE: Traditional developers who join the ACoE see the model working before they are asked to commit to it. Conviction follows evidence.
The correct direction of travel
[Diagram: What fails — a new team embedded inside the existing org, which closes around it. What works — the ACoE as the core, with individual developers (D) entering its structure.]

Existing processes close around the new team. The org's gravity shapes everything inside it.

The ACoE is the core. Developers enter its structure — and adopt its norms.

The right question is not “how do we retrain our existing team?” It is: “how do we build the thing we want, and invite people into it?” Start small. Keep the team structurally independent. Protect domain ownership, conversation discipline, and measurement standards from the gravity of the existing org. Then introduce traditional developers one at a time — into the new structure, on the new terms.

The Roadmap

Two parallel tracks.
Neither can wait.

The transition to agentic development is not a single initiative. It is two parallel tracks managed in concert: converting developers to the new model, and expanding the product surface to absorb their increased capacity. One without the other fails.

Track 1

Developer Conversion

Phase 1
Early adopters
2–3 developers explore tooling and establish working patterns inside the ACoE.
Phase 2
Tooling & practice
Conversation capture, domain split, and new review processes standardised across the ACoE.
Phase 3
Team-wide adoption
Early adopters pull others across. The process exists. Onboarding is fast. Evidence is visible.
Phase 4
Fluency at scale
Individual developers covering the surface area of traditional teams. The model is proven and repeatable.
Managed gap — conversion must not outpace expansion
Track 2

Product Surface Expansion

Phase 1
Surface audit
Map what the existing team can cover. Identify adjacent opportunities that increased capacity could address.
Phase 2
Innovation active
A small, rapid function scoping new product territory ahead of delivery capacity.
Phase 3
Adjacent expansion
New product lines ready for teams as conversion delivers increased capacity. Capacity has somewhere to go.
Phase 4
Full portfolio
The business offers substantially more than before — built by the same team, operating at fundamentally higher capacity.
Capacity and product must grow together
[Chart: output over time. Unmanaged — capacity grows while product stays flat, leaving a surplus. Managed expansion — capacity and product grow together.]

Capacity grows. Product stays flat. Surplus capacity has nowhere to go.

An innovation function expands product scope in parallel. New capacity is absorbed by new opportunities.

The destination
The same team. Substantially more product.
Substantially more fulfilling work.

The developers who enter the ACoE and succeed will carry that experience back into every future context they work in. The model propagates not by mandating it from the top down, but by making it demonstrably better to work inside than outside — and letting the people who have experienced it become its advocates.

The CCoE took a decade to become the standard. It cost enterprises billions in wasted migrations, failed projects, and lost talent before the evidence was overwhelming enough to end the debate. The ACoE is the same model, informed by the same lessons, applied to the next paradigm shift. The question is not whether this transition will happen. It is whether it happens with a plan — or with a decade of damage first.