The Coordination Layer: Why the Biggest Companies of the AI Era Haven’t Been Built Yet
The Misallocation
AI is a genuine platform shift. On that, the market has reached consensus. But consensus that AI matters has obscured the harder question: where the durable value actually sits.
Right now, the overwhelming majority of capital is concentrated in what I’d call the intelligence generation layer — larger models, more tokens, more compute, faster inference. That investment is necessary. Every platform shift begins with infrastructure expansion. But having invested through five technology cycles since the late 1990s — and our firm’s first AI investment in 2013 — I’ve watched this pattern resolve the same way each time: the infrastructure wave validates the shift, but it rarely settles the architecture. And it almost never captures the majority of long-term economic value.
The layer that does? The one that reorganizes economic systems around the new capability. I don’t mean workflow orchestration — connecting apps, automating triggers, moving data between systems. I mean systems that become the decision-making infrastructure of an industry, accumulating structural advantage with every transaction. I believe we are systematically underinvesting in that layer today.
The Strongest Case Against Me
Before I make the argument, let me steelman the opposition — because the counterarguments are good and dismissing them would be intellectually dishonest.
The models are becoming the coordination engines themselves. This is the strongest version of the bull case. As models gain agentic capabilities — tool use, multi-step planning, structured output — they’re not staying in the infrastructure layer. They’re reaching upward into orchestration. Why would a separate coordination layer emerge when the models themselves are learning to coordinate?
Microsoft proves the infrastructure provider CAN capture the next layer. And it has. Repeatedly. DOS to Windows to Office to Azure to Copilot — Microsoft has successfully jumped from infrastructure to coordination layer at least four times across four decades. If OpenAI or Anthropic follows the Microsoft playbook, the coordination layer thesis is weakened significantly.
You’re premature. Intelligence generation isn’t solved. We haven’t cracked reasoning, planning, or world models. The current LLM paradigm may be in its early to middle innings, not its final act. Talking about a coordination layer before the intelligence layer is settled is getting ahead of the story — maybe by a lot.
Coordination is just workflow orchestration with better marketing. The AI ecosystem is already full of orchestration tools — LangChain, Temporal, Zapier, n8n. Most of them are middleware that will get squeezed. How is what you’re describing any different?
You’re talking your book. Any investor arguing that value sits in the next layer is conveniently describing the layer they invest in. That deserves scrutiny.
These are fair objections. Here’s where I think each one breaks down.
Why “The Models Will Do Everything” Is Probably Wrong
The agentic AI argument is real. Models are gaining coordination capabilities. I’m not disputing that. The question is whether general-purpose coordination — a model that can do everything — beats domain-specific coordination systems built for particular industries.
Every prior platform shift has answered this question the same way.
General-purpose platforms create the possibility of coordination. But the companies that capture the durable economics are the ones that build domain-specific systems with proprietary data, deeply integrated workflows, and compounding switching costs. Salesforce didn’t lose to “Microsoft can do CRM too.” Shopify didn’t lose to “Amazon can do storefronts too.” Bloomberg didn’t lose to “Google can do financial search too.” In each case, the domain-specific system that embedded into how an industry actually operated proved more durable than the general-purpose platform that could theoretically do everything.
The reason is structural. General-purpose platforms optimize for breadth. Domain coordination systems optimize for depth — and depth is what creates lock-in. When a system becomes the infrastructure through which an industry makes its critical decisions, it accumulates proprietary data, workflow dependencies, and institutional knowledge that no general-purpose model can replicate by scaling parameters.
There’s a second moat that gets less attention: data gravity. Domain coordination systems accumulate private transactional data, outcome-linked feedback loops, and contextual metadata that general-purpose models literally cannot access. Enterprise boundaries, regulatory constraints, and privacy requirements mean that the richest decision-relevant data never reaches the training sets of frontier models. The proprietary financial workflows inside an investment research platform. The vehicle condition histories flowing through an auto remarketing marketplace. The operational data from autonomous warehouse scans. None of it is available to train a general-purpose model — and it never will be. This isn’t a switching cost problem — it’s a data access problem. And it compounds over time, because every transaction through a coordination system generates more proprietary data that widens the gap.
This is not theoretical for me. We were early investors in AlphaSense, backing the company in 2016, long before the current AI wave. AlphaSense has surpassed $500M in ARR — not because AI-powered search was novel, but because it became the decision infrastructure for how institutional capital gets allocated. It sits at the intersection of proprietary financial data, analytical workflows, and human judgment. 88% of the S&P 100 uses it. For the analysts who do, AlphaSense isn’t just a tool; it’s the operating system for how investment decisions actually happen.
That’s coordination — not in the sense of connecting systems, but in the sense of restructuring how an entire industry’s economic decisions get made. And here’s the critical point: that position holds regardless of which underlying model architecture wins. AlphaSense has evolved its AI capabilities multiple times — from early NLP approaches to transformer-based systems to whatever comes next. The moat isn’t the model. The moat is the integration of intelligence into a specific economic process that compounds over years. The model is a component. The coordination layer is the franchise.
Could OpenAI build a financial intelligence product? Of course. But “could build” is very different from “will capture.” The history of technology is filled with general-purpose platforms that could theoretically do what specialized systems actually did — and lost anyway, because the switching costs, data moats, and workflow integration of the incumbent were too deep.
The Microsoft Problem (And Why It’s the Exception)
The Microsoft counterexample deserves a direct answer because it’s the best evidence against my thesis.
Microsoft has successfully transitioned from infrastructure to coordination layer multiple times. It’s one of the most remarkable franchise-extension stories in business history. If you believe OpenAI, Anthropic, or Google will follow the same path, then the coordination layer argument is weakened significantly.
But look at the historical precedent. How many infrastructure-dominant companies have successfully made that jump? Microsoft has. Who else?
Cisco was the most valuable company in the world in March 2000. Routers and switches. Every packet on the internet. The bull case sounded exactly like today’s AI infrastructure narrative: explosive demand, infinite scaling, whoever owns the pipes owns the era. Cisco is still a profitable business generating $50B+ in annual revenue. But it never captured the application economics above its layer. It enabled the era. It didn’t define it.
The mobile era is sharper. AT&T, Verizon, and Vodafone spent hundreds of billions building the wireless networks that made smartphones possible. They controlled the physical infrastructure. But Apple and Google captured the platform economics. The carriers became commoditized utilities competing on price while iOS and Android — the coordination layers — defined the era and extracted the majority of the profit.
IBM is perhaps the most instructive case. IBM dominated computing for decades and saw the shift to software early. It had every advantage — brand, distribution, R&D budget, enterprise relationships. And it still failed to capture the operating system layer. Microsoft did, starting from a position of relative weakness. The organizational DNA required to build coordination systems is fundamentally different from the DNA that builds infrastructure. Scale and resources aren’t sufficient.
Microsoft is real. I take it seriously as a counterexample. But one company successfully making the transition across four decades is not a strong historical precedent. For every Microsoft, there are dozens of companies like Cisco, Nokia, Sun Microsystems, and IBM that dominated their infrastructure layer and never captured the next one. I’d bet on that ratio holding in AI.
The Two-Front War
Now to the critique I take most seriously: that I’m being premature. That we haven’t solved intelligence yet, and it’s too early to talk about coordination.
This is a fair objection. We haven’t solved reasoning. We haven’t solved planning. We don’t have robust world models. The current LLM paradigm — autoregressive token prediction — has been astonishingly productive, but there are strong arguments that it won’t be the architecture that ultimately delivers these capabilities. We may be in the early to middle innings of intelligence generation, not the final act.
But here’s why that actually strengthens the coordination thesis rather than weakening it.
If the current token-based LLM paradigm is the final architecture for intelligence, then today’s model leaders have a defensible position and a clear path to capturing value above their layer. The bull case for concentrated model investment makes sense in that scenario.
But if it isn’t — if the next breakthroughs in reasoning and planning come from fundamentally different approaches — then today’s model leaders face a two-front war. The coordination layer is emerging above them, and the architectural ground is shifting beneath them.
I believe the second scenario is more likely. Current LLMs are impressive — genuinely so. But the history of computing architectures is a history of paradigm shifts, and there’s no reason to think this one is different. Mainframes didn’t evolve into PCs. PCs didn’t evolve into mobile. On-premise didn’t evolve into cloud. Each transition created new leaders because the new paradigm required fundamentally different design choices.
We’re already seeing early signals. Research into tokenless architectures, continuous reasoning systems, energy-based models, neurosymbolic approaches, and alternative compute paradigms suggests that the next phase of intelligence generation may look very different from the current one. Hybrid classical-quantum systems are one such paradigm, and an area we know firsthand through our early investment in Quantum Circuits (QCI), which recently exited to D-Wave Systems. The current LLM scaling paradigm will likely plateau, and post-LLM approaches will create the next wave of outlier opportunities.
This isn’t theoretical for us. One of our portfolio companies, Radical AI, is building what they call “scientific intelligence” — an autonomous system that designs hypotheses, runs physical experiments through a self-driving robotic lab, captures characterization data, and feeds it back into AI models in a continuous closed loop. Their platform, Antimatter, coordinates AI-driven experimental campaigns across their lab, which can create and characterize more than 25 novel alloys in a single day. The Department of Energy recently selected them as one of 24 partners for The Genesis Mission — alongside OpenAI, Anthropic, NVIDIA, Microsoft, and Google. This is what post-LLM intelligence looks like when it has to interact with atoms, not just predict text. And it requires an architecture that looks nothing like scaling transformers.
This is why the coordination layer argument matters now, not later. If you build a company whose value depends on a specific model architecture, you’re exposed to architectural disruption every time the intelligence layer shifts. But if you build a company whose value comes from being embedded in how an industry makes decisions — with proprietary data, workflow integration, and compounding switching costs — your position holds across model transitions. You can swap the intelligence engine underneath without losing the franchise above.
That’s exactly what we look for at Tribeca Venture Partners: companies that are model-agnostic at the intelligence layer but structurally irreplaceable at the coordination layer. Swap the model, keep the franchise. The workflow is the moat.
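To make “swap the model, keep the franchise” concrete, here is a minimal sketch. Everything in it is hypothetical (the class names, the toy scoring heuristics); it illustrates the pattern, not any real product’s architecture. The coordination layer owns the workflow and the accumulated transaction history; the intelligence engine behind it is a replaceable component:

```python
from abc import ABC, abstractmethod

class IntelligenceEngine(ABC):
    """Any model backend -- legacy NLP, frontier LLM, post-LLM -- plugs in here."""
    @abstractmethod
    def assess(self, context: dict) -> float:
        """Return a 0..1 score for a domain decision."""

class KeywordEngine(IntelligenceEngine):
    """Stand-in for an early NLP-era approach."""
    def assess(self, context: dict) -> float:
        return 1.0 if "urgent" in context.get("text", "").lower() else 0.0

class EmbeddingEngine(IntelligenceEngine):
    """Stand-in for a newer model-based approach (placeholder heuristic)."""
    def assess(self, context: dict) -> float:
        return min(1.0, len(context.get("text", "")) / 100)

class CoordinationLayer:
    """The franchise: the workflow, the decision rules, and the proprietary
    transaction history. The engine is just a swappable component."""
    def __init__(self, engine: IntelligenceEngine):
        self.engine = engine
        self.history = []  # proprietary data accumulates with every transaction

    def decide(self, context: dict) -> str:
        score = self.engine.assess(context)
        decision = "escalate" if score > 0.5 else "route-normally"
        self.history.append((context, score, decision))  # data gravity
        return decision

    def swap_engine(self, new_engine: IntelligenceEngine) -> None:
        """A model transition: the history and workflow above it survive."""
        self.engine = new_engine

layer = CoordinationLayer(KeywordEngine())
print(layer.decide({"text": "URGENT: pricing anomaly"}))  # escalate
layer.swap_engine(EmbeddingEngine())
print(layer.decide({"text": "routine inventory sync"}))   # route-normally
print(len(layer.history))                                 # 2 -- history survives the swap
```

Swapping engines leaves `layer.history`, the proprietary data, untouched; that is the structural point.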
Coordination Is Not Orchestration
OK — this is the part where I need to be precise, because the terms get used interchangeably and it drives me a little crazy.
The AI ecosystem is already full of orchestration tools. LangChain orchestrates LLM calls. Temporal manages distributed workflows. Zapier and n8n connect applications and automate triggers. These are real and useful products. But they’re plumbing. They automate existing processes — take a workflow that exists and make it run faster, cheaper, more reliably. The value proposition is efficiency. They sit beside the value chain, passing data and instructions between systems.
That’s why orchestration gets commoditized. It doesn’t accumulate structural advantage over time. A better orchestration tool can always replace the current one because the switching cost is operational, not strategic. It’s a migration project, not an organizational rewiring.
What I mean by coordination is something fundamentally different: systems that restructure how economic decisions get made. Not automating existing workflows — reorganizing the economic logic itself. These systems sit inside the value chain. They become the medium through which economic activity flows, and they accumulate proprietary data, decision context, and institutional dependencies that compound with every transaction.
The difference between orchestration and coordination is the difference between a stage manager and a conductor. The stage manager ensures everyone hits their marks and the lights go on at the right time. Essential, but replaceable. The conductor shapes the interpretation, makes real-time artistic decisions, and determines the outcome of the performance. Strategic and irreplaceable.
In concrete terms: an orchestration system routes a customer service inquiry to the right AI agent. A coordination system restructures how an entire enterprise allocates its workforce between human and AI agents — deciding which tasks get delegated, at what authority level, with what fallback protocols, while accumulating data on performance, cost, and reliability that makes the system smarter and more entrenched over time.
Consider what happened in digital advertising. The raw infrastructure was ad serving — moving creative assets onto pages. That’s orchestration. But the coordination layer — real-time bidding, programmatic optimization, yield management — reorganized how hundreds of billions of dollars in advertising spend gets allocated each year. We were early investors in AppNexus, which became a core piece of that coordination infrastructure before its acquisition by AT&T in 2018. The value wasn’t in any single algorithm. It was in the system that buyers and sellers couldn’t operate without — the platform through which the economic transaction itself flowed.
The same dynamic is playing out in physical industries. An orchestration tool chains together API calls in a supply chain. A coordination system reorganizes how an entire industry sources, prices, and routes inventory — becoming the platform through which procurement decisions flow, accumulating intelligence that compounds with every transaction. We’ve seen this with ACV Auctions (ACVA), where we were early investors. The company didn’t just digitize auto auctions. It became the economic infrastructure through which an industry transacts, accumulating data on every vehicle’s condition, pricing history, and buyer behavior that makes the platform more valuable with every trade.

We’re seeing the next generation of this now. Gather AI, another company we backed early, is building a Physical Intelligence platform for logistics. By autonomously capturing ground-truth inventory data inside warehouses and feeding it directly into enterprise systems, it closes the gap between digital records and physical reality. The result isn’t better reporting — it’s a re-architecture of how inventory decisions get made. As the system scales across facilities, it accumulates operational data that improves forecasting, exception handling, and capital efficiency — ultimately delivering hard ROI that compounds and becomes increasingly defensible.
There’s also a genuinely new dimension here that has no prior analog: agent governance. As enterprises deploy fleets of AI agents — each with different capabilities, authority levels, and access to systems — someone has to manage the allocation. Which agents handle which decisions? What permissions do they carry? How do you maintain context and memory across sessions? How do conflicts between agents get resolved? Who has authority to commit capital or resources?
This isn’t workflow automation. It’s closer to what an operating system does for a computer — managing resources, enforcing permissions, maintaining state, making allocation decisions. The companies that build the operating system for the agentic enterprise will occupy one of the most structurally important positions in the AI economy. And it’s a category that barely exists yet.
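The operating-system analogy can be sketched in a few lines. Everything here is a hypothetical illustration of the category (the agent names, the dollar-denominated authority ceiling, the escalation rule), not a description of any existing product:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    capabilities: set      # e.g. {"refund", "reorder"}
    authority_usd: float   # spend ceiling this agent may commit on its own

@dataclass
class GovernanceLayer:
    """OS-like role: enforce permissions, maintain state, make allocation decisions."""
    agents: list
    audit_log: list = field(default_factory=list)

    def allocate(self, task: str, amount_usd: float) -> str:
        # Route to the first agent that is both capable and authorized;
        # otherwise escalate to a human -- the fallback protocol.
        for agent in self.agents:
            if task in agent.capabilities and amount_usd <= agent.authority_usd:
                self.audit_log.append((task, amount_usd, agent.name))
                return agent.name
        self.audit_log.append((task, amount_usd, "human-escalation"))
        return "human-escalation"

gov = GovernanceLayer(agents=[
    Agent("support-bot", {"refund"}, authority_usd=100),
    Agent("ops-bot", {"refund", "reorder"}, authority_usd=5000),
])
print(gov.allocate("refund", 40))       # support-bot
print(gov.allocate("refund", 2500))     # ops-bot (above support-bot's ceiling)
print(gov.allocate("reorder", 50000))   # human-escalation
```

Even this toy version shows why the position is structural: the audit log, the permission model, and the allocation history all live in the governance layer, not in any individual agent.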
The “Talking Your Book” Rebuttal
Fair. Every investor’s market thesis conveniently describes their own portfolio. I’m not immune.
Here’s the honest version: We’ve been an early investor in companies that embedded intelligence into economic workflows across advertising, financial services, auto remarketing, and warehouse logistics — and in each case, the durable value came from structural position inside the industry’s decision-making infrastructure, not from the novelty of the technology.
Does that prove the thesis? No. It’s a sample size of a career, not a universal law. But the pattern is consistent enough across my experience, and across every platform shift I’ve studied, that I believe it’s signal rather than noise. The test is whether readers see the same pattern in their own industries — and based on the conversations I’m having with founders and operators building in AI, many of them do.
I’d still encourage skepticism toward any investor — including me — who claims the most important part of a market is the part they invest in. The best I can offer is a framework, the historical evidence, and the track record.
The Mispricing
Here’s where I’ll be most direct.
Across the venture landscape, many AI companies are currently priced as if commoditization won’t compress their margins, as if open-source models won’t erode their differentiation, as if the architecture is already settled, and as if capital will remain abundant indefinitely.
The assumption that the architecture is settled is the one I’d push on hardest. There’s a credible argument that the capital requirements for frontier models create natural oligopolies with defensible moats. Maybe. But if the next breakthroughs come from different paradigms with different scaling properties, the moats that look structural today could prove architectural. “Priced to perfection” is fragility disguised as confidence — and pricing in architectural permanence during a period of active architectural experimentation is a very specific kind of fragility.
The market you raise in is rarely the market you exit in.
I have conviction on this because I’ve lived the inverse — and I’m actively investing through it. In 2021, when the market was at peak euphoria, we prioritized returning capital to our investors rather than deploying into overpriced rounds. When others were buying, we were selling. That looked contrarian then. It looks like cycle discipline now — earned by navigating the 2000 bust, the 2008 crisis, and every correction in between. The fun part about cycle discipline is that nobody calls it that while you’re doing it. They call it “missing the market.” You get a lot of polite nods on Zoom. We stayed patient, looking for AI investments with genuine differentiation — specifically, companies with proprietary data moats, vertical domain expertise, and post-LLM architectural approaches — and began deploying our highest-conviction AI capital in 2024.
This doesn’t mean today’s AI leaders will fail. Some will absolutely transition into enduring franchises — and the natural oligopoly argument has real merit for a small number of frontier providers. But the historical precedent across every prior platform shift suggests that many of the companies commanding the highest valuations during the infrastructure phase will not be the defining companies of the mature era.
My Prediction
The largest companies of the AI era will likely not be the ones generating the most tokens.
They will be the ones that become structurally load-bearing inside industries. The ones that coordinate distributed intelligence across enterprises — not by connecting APIs, but by restructuring how economic decisions get made. That embed AI into capital flows, physical systems, and autonomous operations. That build the governance and allocation layer for the agentic economy. That manage the transition from tools to agents to semi-autonomous economic actors.
And they’ll be model-agnostic — designed to survive and compound across whatever architectural transitions come next. Because transitions will come.
Some of today’s model companies will make the leap. They have the resources, the talent, and the ambition. But the history of technology says the conversion rate for infrastructure incumbents moving into coordination is low — not because they can’t, but because the organizational DNA required is fundamentally different. Building better intelligence and reorganizing economic systems around intelligence are two different disciplines. Very few companies master both.
The internet didn’t reward whoever moved the most packets. It rewarded whoever controlled commerce on top of the packets.
AI will follow the same logic. Not perfectly. Not uniformly. But directionally.
The Line That Matters
The biggest risk right now isn’t missing AI.
It’s mistaking technological inevitability for architectural finality.
They are not the same thing.
The market you raise in is a moment. Architecture is a decade.
The firms and founders that win this era won’t be the ones that chased the most narrative velocity. They’ll be the ones with the pattern recognition to distinguish between what’s exciting and what’s structural — and the discipline to invest only in the latter.
That’s the bet we’ve been making since 2011. And it’s the bet we’re doubling down on now.
If you’re a founder building tokenless AI architectures, post-LLM reasoning systems, or coordination infrastructure for the agentic economy — I’d genuinely like to hear from you. These are the areas we think will define the next decade, and we’re actively backing the people building them. Subscribe and hit reply to this email, or find me on [LinkedIn](https://www.linkedin.com/in/brianhirsch/).
---
Brian Hirsch is co-founder and Partner of Tribeca Venture Partners, a New York-based venture firm investing at the intersection of technology and economic infrastructure since 2011.

