AI Trust Is Falling While AI Use Is Rising. That Gap Is the Opportunity.

Here’s a strange little paradox to sit with. In the past year, the share of organisations using AI jumped from 55% to 78%. At the same time, public trust in AI companies quietly dropped — from 50% to 47%.

More AI. Less trust. Simultaneously.

If that feels contradictory, it is. And it might be the most interesting signal in enterprise AI right now.


The Trust Deficit Nobody Planned For

The pattern worth noting is that AI adoption and AI trust have been running on completely different tracks. Enterprises rushed to deploy. Governance frameworks lagged. AI incidents rose sharply. And users — employees, customers, regulators — noticed.

The Stanford AI Index documented this gap explicitly. AI-related incidents are rising. Fewer than 15% of businesses have completed comprehensive trustworthy AI assessments. Corporate implementation of AI governance consistently trails policymaking momentum. The technology moved fast. The accountability structures moved considerably slower.

This isn’t a crisis, exactly. But it’s the kind of slow-burn credibility problem that tends to become very expensive to fix once it’s fully visible.

The organisations that understood this a year or two ago — and invested in governance before it was urgent — are now sitting in a rather comfortable position.


From Compliance Obligation to Competitive Signal

There’s a useful analogy here. Think about what happened with data privacy after major regulations landed in Europe. Initially, most organisations treated compliance as a cost — something to minimise, box-check, and move past. Then a smaller number of organisations realised that demonstrating genuine privacy practices was a customer acquisition story, not just a legal one. Suddenly, “we take your data seriously” became a differentiator rather than fine print.

Responsible AI governance is following a strikingly similar arc.

The organisations now seeing the competitive payoff are the ones that didn’t wait for a regulation to force their hand. They built transparency mechanisms into their AI systems. They created accountability frameworks for automated decisions. They made AI governance a board-level concern — not just an IT compliance task. And now, when procurement teams, regulators, and enterprise buyers come asking about AI governance, those organisations have real answers rather than aspirational documentation.

As one research lead at Cornerstone put it after achieving ISO 42001 certification: responsible AI governance is “quickly becoming a prerequisite, not a differentiator, for HR technology buyers.” The window where it is a differentiator is, in other words, closing.


The Standards Are Landing

ISO 42001 — the world’s first certifiable AI management system standard — landed in late 2023 at what felt like an odd moment. GenAI hype was at its peak. Boardrooms were more interested in demos than governance frameworks. It felt a bit like arriving at a party with a filing system.

Two years on, the timing looks rather prescient.

The EU AI Act came into force, pushing responsible AI governance to the top of enterprise priorities. In a single year, US federal agencies issued nearly twice as many AI-related regulations as in the year before. AI appeared in legislative discussions across 75 national governments. The filing system turned out to be what everyone needed.

ISO 42001 adoption is now accelerating meaningfully. In a recent compliance benchmark survey, some 76% of organisations said they plan to pursue ISO 42001 or similar frameworks. KPMG achieved certification in late 2025 — among the first of the Big Four to do so. The signal being sent to audit clients, procurement teams, and regulators is deliberate: we govern AI the way we govern everything else we’re trusted with.

The framework pairs neatly with the EU AI Act: the Act defines what must be achieved, and the standard describes how to achieve it repeatably. For organisations operating across markets, that combination is fast becoming the operational standard.


What “Exceeding Expectations” Actually Looks Like

A claim worth examining: the competitive advantage isn’t in meeting minimum standards, it’s in exceeding them. That’s true — but only if you’re clear about what exceeding actually means in practice.

It doesn’t mean building a more elaborate compliance document. It means building AI systems where transparent decision-making, bias mitigation, and human oversight aren’t bolted on as features — they’re designed in from the start. It means being able to explain, in plain language, why an AI system made a particular decision. It means having an audit trail that satisfies a regulator and actually makes sense to the person affected by the decision.
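What does “designed in” look like in practice? Here is a minimal sketch, in Python and with entirely hypothetical names and fields, of an audit record written at decision time rather than reconstructed after a complaint. Nothing below is prescribed by ISO 42001 or the EU AI Act; it is simply one way to make the accountability trail part of the system’s data model from day one.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class DecisionRecord:
    # One auditable record per automated decision. Hypothetical sketch:
    # the field names are illustrative, not drawn from any standard.
    decision_id: str
    model_version: str             # which model/config produced the decision
    inputs_summary: dict           # the inputs the decision was based on
    outcome: str                   # what the system decided
    plain_language_reason: str     # an explanation a non-specialist can read
    human_reviewer: str | None     # who can override the decision, if anyone
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Written at decision time, not reconstructed after a complaint:
record = DecisionRecord(
    decision_id="loan-2025-00042",
    model_version="credit-risk-v3.1",
    inputs_summary={"income_band": "C", "tenure_months": 18},
    outcome="declined",
    plain_language_reason=(
        "Income history is shorter than the 24-month minimum "
        "this product requires."
    ),
    human_reviewer="ops-review@example.com",
)

The design point is small but telling: the regulator’s audit trail and the customer’s plain-language answer come from the same record, because the record existed before anyone asked for it.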

The organisations doing this well are treating responsible AI as a product quality question, not a legal one. Just as you wouldn’t ship software with known security vulnerabilities and call it a feature, you shouldn’t deploy decision-affecting AI without documented accountability mechanisms and then expect customers to simply trust it.

The evidence suggests this posture pays off. Responsible AI practices are now directly associated with customer adoption rates, procurement decisions, and — for vendors — the ability to support premium pricing. That’s not an ethics argument dressed in commercial language. It’s a genuine observation about what enterprise buyers now look for.


The Honest Complexity

It would be easy to make this sound more settled than it is.

The regulatory environment is uneven. The US has shifted toward innovation-first mandates under the new administration. EU enforcement mechanisms are still being developed. Standardised trustworthy AI assessments remain rare among major model developers. The gap between policy aspiration and operational reality is, in many organisations, still significant.

And there’s a talent angle too. AI ethics, governance design, and algorithmic auditing are genuinely specialised skills that sit at the intersection of technical, legal, and ethical expertise. The supply of people who can do this work credibly is thin.

Which is precisely why the organisations that moved early — building governance capabilities when the demand for them was lower and the urgency less visible — are now looking at a compounding return on that investment. They have the talent. They have the processes. They have the documentation. When the regulator, the customer, or the partner comes asking, they have the answer.


The Window That’s Still Open

Here’s where things stand: governance hasn’t yet become table stakes everywhere. In many markets and sectors, genuinely trustworthy AI is still something that differentiates rather than merely qualifies. That window won’t stay open indefinitely.

The pattern running through most emerging technology standards is roughly the same: early movers build capability when it’s voluntary, the late majority scrambles when it becomes mandatory, and the laggards pay a price that’s partly financial and partly reputational. The ISO 42001 and EU AI Act combination looks like it’s following that arc.

For founders building AI products, the framing that seems most durable is treating governance as product quality — not as a sales requirement or a compliance checkbox. Products that can be audited, explained, and trusted don’t just clear procurement hurdles. They attract the customers who stay longest, the partners who go deepest, and the regulators who cause the fewest problems.

The organisations and products that win on trust tend to have decided to be trustworthy before anyone made them.


In your organisation, is AI governance being driven by compliance urgency, competitive strategy — or are those two things starting to feel like the same conversation?

Let’s keep learning — together.
