There’s a joke in engineering circles about the most expensive phrase in technology: “We’ll just build it ourselves.” It usually precedes eighteen months of effort, a talent acquisition war, and a product that’s two years behind what a vendor already ships.
Enterprise AI is living through a version of this joke in real time — and the punchline is arriving faster than most people expected.
The Confident Phase
Cast your mind back to early 2023. Every serious enterprise with an AI ambition was staffing up an internal AI team. The reasoning was understandable: the technology was new enough that external solutions seemed immature, and the competitive stakes felt too high to hand the wheel to a vendor. Building internally meant control. It meant differentiation. It meant not being dependent on someone else’s roadmap.
Menlo Ventures’ 2023 State of Generative AI report captured this mood well: roughly 80% of enterprise AI solutions were being built in-house rather than bought at the time, and the expectation was clearly that internal capability would only deepen from there.
One year later, the picture had already started to shift. By late 2024, the split had moved meaningfully: 53% of AI solutions were being purchased, 47% built internally. That’s not yet a dominant lean toward buying. But the direction of travel is notable, and the underlying reasons are worth understanding — because they say something important about how enterprise AI is actually maturing.
Why Build Keeps Losing Ground
The honest explanation isn’t that enterprises gave up on internal AI capability. It’s that the definition of “build” has quietly changed.
In the early phase, building meant training models, constructing data pipelines from scratch, and engineering everything from the retrieval layer upward. It was genuinely bespoke work, requiring rare talent and substantial infrastructure investment. And for most organisations — outside of a handful of technology companies with existing ML expertise — it turned out to be much harder than the optimistic initial projections suggested.
Meanwhile, the available-to-purchase landscape matured rapidly. Code copilots reached 51% enterprise adoption in 2024, with GitHub Copilot alone hitting a $300 million revenue run rate. Support chatbots stood at 31%, enterprise search at 28%, and RAG adoption jumped from 31% to 51% year over year. These weren’t early-stage experiments. They were production deployments with real reliability records and measurable impact data.
The pattern worth noticing: when a category has a credible vendor solution with a six-month deployment timeline, the calculus for an 18-month internal build changes significantly. Not because building is wrong — but because the opportunity cost is real. Every month spent building infrastructure is a month not spent on the applications and workflows that actually differentiate the business.
What Internal Teams Are Actually Doing Now
Here’s where the story gets more interesting than the buy-vs-build headline suggests.
Enterprise data and AI teams haven’t shrunk. They’ve reoriented. The work that consumed them in 2023 — foundation model evaluation, pipeline construction, vector database setup — has become more standardised and more purchasable. What remains genuinely internal, and genuinely hard, is the layer above it.
Integration is one dimension. Connecting a purchased AI solution to existing enterprise systems — with their decades of accumulated data formats, access controls, and workflow quirks — is not something a vendor can do for you. It requires people who understand both the AI system and the legacy environment, and there are vanishingly few of those.
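To make that concrete, here’s a minimal sketch of the kind of glue work that stays in-house: translating a legacy record format, with its access rules intact, into whatever schema a purchased AI service expects. Every name in it (the ticket fields, the department code table, the payload shape) is a hypothetical placeholder rather than any real vendor’s API.

```python
from dataclasses import dataclass

# Hypothetical stand-ins for a legacy system record and a purchased AI
# service's expected input. Neither models a real product; they illustrate
# the shape of the mapping work.

@dataclass
class LegacyTicket:
    ticket_id: str
    body_text: str
    dept_code: str      # e.g. "07" meaning Billing, per a decades-old code table
    restricted: bool    # access-control flag the AI layer must respect


DEPT_NAMES = {"07": "Billing", "12": "Claims", "31": "Field Ops"}


def to_vendor_payload(ticket: LegacyTicket) -> dict | None:
    """Translate a legacy record into the schema the purchased service expects.

    Returns None when access rules say the record must not leave the system."""
    if ticket.restricted:
        return None  # restricted records never reach the external service
    return {
        "id": ticket.ticket_id,
        "text": ticket.body_text,
        "department": DEPT_NAMES.get(ticket.dept_code, "Unknown"),
    }


if __name__ == "__main__":
    sample = LegacyTicket("T-1042", "Invoice charged twice last month.", "07", False)
    print(to_vendor_payload(sample))
```

Trivial as it looks, multiply this by a few hundred record types and a few dozen access policies and you have the integration backlog that no vendor can take off your hands.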
Governance is another. As explored in the AI ethics post from mid-2024, the gap between having an AI policy and actually governing AI in practice is where the hard work lives. That work is accelerating as enterprises move from a handful of pilots to dozens of production deployments across multiple purchased platforms.
The emerging picture is of internal teams becoming sophisticated assemblers and governors rather than pure builders — less like a kitchen brigade cooking from raw ingredients, more like a conductor working with a full orchestra. Different role, not lesser role.
The Modular Stack Arrives
The structural shift underneath all of this is worth naming: enterprise AI is moving toward a modular, multi-vendor architecture rather than a monolithic internal build.
The average enterprise in 2024 was running three different foundation models simultaneously, switching between them based on the task. That’s not indecision — it’s pragmatism. Closed-source models accounted for 81% of enterprise usage, but even the most committed closed-source organisation was mixing providers at the margins. OpenAI’s enterprise market share fell from 50% to 34% in a single year, with Anthropic nearly doubling to 24%. The speed of that shift suggests buying decisions are being revisited far more actively than the conventional “enterprise inertia” narrative would predict.
The implication for anyone building AI infrastructure and integration tools is straightforward: the plumbing market is real, it’s growing, and it’s increasingly specialised. Every enterprise assembling solutions from five different vendors needs tooling to make those solutions talk to each other, behave consistently, and be governed as a coherent system.
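As a rough illustration, here’s what the thinnest version of that plumbing might look like: a task-based router that sits in front of several purchased models and logs every routing decision so the stack can be governed as one system. The provider functions and the routing table are assumptions invented for this sketch, not any particular vendor’s SDK.

```python
from typing import Callable
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-router")

# Hypothetical provider callables; in practice each would wrap a vendor SDK.
# The point is the shape: one internal interface in front of several vendors.
def provider_a(prompt: str) -> str:
    return f"[provider-a] {prompt[:40]}..."

def provider_b(prompt: str) -> str:
    return f"[provider-b] {prompt[:40]}..."

# Task-based routing table: the "several models, switched by task" pattern.
ROUTES: dict[str, Callable[[str], str]] = {
    "code": provider_a,
    "support": provider_b,
    "default": provider_b,
}

def complete(task: str, prompt: str) -> str:
    """Route a request to whichever purchased model suits the task,
    and record the choice so the whole stack can be audited in one place."""
    handler = ROUTES.get(task, ROUTES["default"])
    log.info("task=%s routed_to=%s", task, handler.__name__)
    return handler(prompt)

if __name__ == "__main__":
    print(complete("code", "Refactor this function to remove the nested loops."))
    print(complete("support", "Customer says the export button does nothing."))
```

The logging line is the point: once every request passes through one internal seam, consistency and governance become properties of that seam rather than of five separate vendor dashboards.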
This connects to the data infrastructure thread explored in The Data Stack Finally Steps Into the Spotlight and the RAG conversation in The Pattern That Keeps Enterprise AI Honest: the plumbing layer keeps asserting its importance, even as the application layer gets most of the attention.
The Startup Opportunity in the Assembler Economy
There’s a counterintuitive angle for founders here.
The rise of the enterprise assembler model is sometimes read as bad news for startups — incumbents capturing the foundation model layer, large platforms capturing application spend. But the Menlo 2024 data tells a more nuanced story. AI-native startups actually gained meaningful share against incumbents in the application layer in 2024. Chegg lost 85% of its market cap to ChatGPT-powered alternatives. Stack Overflow lost half its traffic to GitHub Copilot. When startups deliver a genuinely better AI-powered experience for a specific workflow, enterprise customers switch — faster than the conventional wisdom about enterprise sales cycles would suggest.
The lesson from the data isn’t “incumbents win everything.” It’s “incumbents win the horizontal, specialists win the vertical.” And in an assembler economy, where enterprises are deliberately maintaining flexibility across their AI stack, there’s structural space for the specialist who genuinely knows one domain deeply.
The buy-vs-build question will keep evolving as the technology matures and as internal enterprise capability grows. But the direction of the current moment is clear: the competitive advantage is shifting from who builds the most to who assembles and governs the best.
In your organisation’s AI journey, where has the build-vs-buy line landed — and is it where you expected it to be at this stage?
Let’s keep learning — together.
Share your thoughts