There’s a useful test for whether a technology has genuinely matured: when it stops being the subject of the meeting and starts being the thing that makes the meeting possible. Email passed that test so long ago that the analogy feels quaint. Cloud passed it quietly over the course of a decade. AI, in 2025, appears to have passed it with unusual speed.
Not everywhere. Not uniformly. But the direction is clear.
What the numbers are actually saying
Menlo Ventures’ State of Generative AI report, published this month, puts enterprise generative AI spend at $33 billion in 2025. More telling than the absolute number is where it went: more than half flowed into AI applications — the products and tools that sit in front of actual users in actual workflows — rather than into the underlying infrastructure. That’s a meaningful shift from prior years, when most enterprise AI spending was concentrated on foundation model access and compute.
The application layer alone now represents more than 6% of the entire software market, achieved within three years of ChatGPT’s launch. By comparison, cloud computing took roughly a decade to reach equivalent software market penetration. The compression of that timeline is the story.
The data stack discussion from earlier this year anticipated this pattern: where enterprise AI money was moving already pointed toward application-layer maturity before the headline numbers caught up. That signal now has a year’s worth of confirmation behind it.
The honest side of the ledger
It would be comfortable to write a pure triumph narrative here. The numbers don’t quite allow it.
Forty-two percent of companies abandoned most of their AI initiatives in 2025 — up sharply from 17% the year before. That’s not a failure of ambition. It’s a market working through the gap between what AI can do in controlled conditions and what it actually delivers at production scale, inside real organisational constraints, with real data quality and real governance overhead.
Seventy to eighty-five percent of AI initiatives still fail to meet expected outcomes, according to MIT and RAND Corporation research published this year. The productivity gains are real — employees who use AI consistently report improvements of around 40%, and controlled studies show gains of 25-55% depending on function. But the path from “this works in a pilot” to “this works across the organisation” turns out to involve more than scaling the model.
This connects to a thread running through several posts this year. The architecture bottleneck argument — that AI ambitions have a ceiling set by infrastructure decisions made years ago — proved out repeatedly in 2025. The governance discussion showed that organisations treating governance as an afterthought were the ones most likely to be in that abandonment statistic. The ceiling isn’t the AI. It’s what surrounds it.
What actually separated the leaders
The Menlo report makes an observation worth dwelling on: AI-native startups now earn nearly $2 for every $1 incumbents earn in the AI application layer. The velocity gap — the difference in how fast a small AI-native team can ship versus an incumbent managing legacy architecture, partner agreements, and organisational complexity — proved wider and more durable than most incumbents anticipated.
At the same time, the foundation model landscape shifted decisively. Anthropic’s enterprise LLM share moved from 24% last year to 40% this year. OpenAI’s enterprise share fell from 50% in 2023 to 27% today. Google gained meaningfully. The market didn’t consolidate around a single winner — it redistributed around quality signals that enterprise buyers could actually measure: accuracy, safety, reliability, and governance.
The organisations that navigated the year well tended to share a pattern: they picked fewer bets and took them further. The “year AI stopped being a pilot programme” post at the start of this year made a prediction — the question had shifted from “can it work?” to “how do we scale it?” What 2025 proved is that scaling it is genuinely hard, and that “scaling it” is really three separate problems: technical depth, organisational change, and governance infrastructure.
What the year didn’t settle
A few questions remain legitimately open as the year closes.
The ROI timeline is longer than advertised. Companies that moved early report $3.70 in value for every dollar invested, with top performers achieving $10.30 returns. But most organisations achieve satisfactory ROI within two to four years — considerably longer than the seven-to-twelve month payback periods typical in other technology investments. The patience required for that gap is a cultural challenge as much as a financial one.
The talent constraint is real and not yet resolved. Sixty-eight percent of executives cite talent shortages as the primary barrier to scaling AI, per Deloitte. Workers with AI skills command a 43% wage premium, yet only one-third of workers report receiving adequate AI upskilling. The gap between what leadership wants to deploy and what the workforce is prepared to operate hasn’t closed — it has widened in some organisations.
And the question of what “boring” looks like for AI — raised at the start of this year — turns out to be the right frame for where we are. The novelty has genuinely faded. The serious work of embedding AI into operations, governance, and culture is underway. That work is less photogenic than a product launch, slower than a demo, and considerably more valuable.
2025 – The year AI stopped being a project. The year the actual work began.
Looking back at your own organisation’s AI journey in 2025 — what surprised you most, and what do you wish you’d done differently?
Let’s keep learning — together.