The Enterprise AI Spending Signal That Most Strategies Are Missing

There’s a classic moment in any market’s development when the numbers stop being surprising and start being instructive. The raw question — is this real? — gets answered. The more interesting question — what does the pattern mean? — moves into focus.

Enterprise AI has arrived at that moment.


What the 2024 Numbers Actually Show

Menlo Ventures’ annual State of Generative AI in the Enterprise report, published in late 2024, is the kind of data that rewards slow reading. The top line is loud enough: enterprise spending on generative AI grew roughly sixfold in a single year, from $2.3 billion to $13.8 billion. But the more revealing numbers are a level down.

In 2023, roughly 80% of enterprises were sourcing AI solutions from external vendors. By 2024, that split had narrowed dramatically — 47% of AI solutions were being built internally, 53% purchased externally. On paper, that looks like a story of enterprises gaining confidence in their own capabilities. In practice, it reflects something subtler: a market still working out which problems are genuinely worth building for and which are better solved by buying something already proven.

The distinction matters, and the trajectory is clear. As ready-made AI solutions demonstrate production-ready reliability, the economics of internal builds look harder to justify. The conversation isn’t really “build vs. buy” anymore — it’s “build what, exactly, and why?”


The Application Layer Wakes Up

For much of AI’s early enterprise chapter, the money went upstream. Foundation models, infrastructure, APIs — the picks-and-shovels of the AI boom captured most of the investment while the application layer remained relatively thin.

That changed in 2024. Investment in AI-native applications reached $4.6 billion — almost eight times the $600 million recorded the year before. It’s the kind of acceleration that suggests a market crossing a threshold rather than just growing linearly.

The use cases attracting real spend are instructive too. Code copilots reached 51% enterprise adoption — the clearest “killer use case” to emerge so far. Support chatbots at 31%, enterprise search at 28%. These aren’t experimental deployments. They’re workflow integrations with measurable productivity impact. The pattern worth noting: the applications winning enterprise budgets are the ones with an obvious, short feedback loop between “AI did something” and “here is what that saved us.”


Where the Consolidation Is Happening

The foundation model market is worth paying attention to for a different reason — not because of the growth, but because of the movement.

At the start of 2024, OpenAI held 50% of enterprise LLM market share. By the end of the year, that had fallen to 34%, with Anthropic nearly doubling its share to 24%. The speed of that shift is striking in a market where switching costs supposedly create stickiness. It suggests that enterprise buyers, despite their reputation for inertia, are actively evaluating and re-evaluating their model choices — and that the current pecking order is less stable than it might appear.

This connects to a broader consolidation dynamic. Seventy-two percent of enterprise decision-makers expected broader AI adoption in the near term, according to the same report. But 60% of enterprise AI investment was still coming from innovation budgets rather than permanent operational allocations. That’s a signal of genuine commitment mixed with hedging — organisations moving forward, but not yet ready to treat AI spend as fixed infrastructure cost.


The Question 2025 Will Actually Answer

Here’s the tension the spending data surfaces: widespread adoption and meaningful scaling are different things. Gartner estimated that only around 9% of enterprises had meaningfully scaled AI by the end of 2024. The other 91% had experimented, some at significant cost.

The question 2025 will answer isn’t whether enterprises will spend more — they clearly will. It’s whether the next wave of spending reflects genuine value capture or an extension of the pilot mentality under a different name. The organisations that close that gap will share something in common: they’ll have figured out how to move AI from the innovation budget to the operating budget, which requires a very different conversation internally.

As the “AI getting boring” post explored, the real competitive edge is shifting from access to execution. Everyone can access the same foundation models. The differentiator is the organisational capability to deploy them reliably, measure them honestly, and improve them continuously.


The Founder’s Geometry

For anyone building in the AI application layer, the spending pattern has a useful geometry.

Startups are taking share at the application level — in 2024, the Menlo data shows AI-native startups gaining meaningfully against incumbents in nearly every functional category. But the foundation model and infrastructure layers are consolidating fast around well-capitalised players with distribution advantages that are genuinely hard to replicate.

The implication isn’t pessimism. It’s focus. The application layer rewards specificity: the teams that deeply understand one workflow, one vertical, one buying pattern — and build for that — are outcompeting those trying to be general-purpose AI platforms for everyone. Healthcare. Legal. Finance. Each of those categories is earlier in its AI adoption than the headline enterprise numbers suggest, and each has structural complexity that creates real defensibility for whoever gets there first with genuine depth.


The data is telling a cleaner story than the noise around it. AI spending is accelerating, the buy-vs-build shift is real, and the application layer is where the next wave of differentiation is being built.

The more useful question, perhaps, is which part of your organisation’s AI spend is already proving its value — and which part is still looking for a feedback loop to justify it.

What would “moving from the innovation budget to the operating budget” actually require in your context?

Let’s keep learning — together.
