There’s a familiar narrative about AI governance. It goes something like this: legal wants guardrails, compliance wants documentation, the board wants oversight, and meanwhile the product team just wants to ship. Governance, in this telling, is the speed bump between ambition and execution.
The interesting thing is that a meaningful number of organisations have quietly stopped believing that story.
The compliance framing was always underselling it
It’s worth being honest about why governance got such a bad reputation. For a long time, the framing was primarily about avoiding fines and checking regulatory boxes. And there were plenty of boxes to check: GDPR, sector-specific requirements, internal audit trails. The work was real, the overhead was real, and the connection to business value was, at best, indirect.
The EU AI Act has changed the stakes considerably. The penalty regime has been live since August this year: fines of up to €35 million or 7% of global annual turnover for violations of prohibited AI practices and core governance obligations. The compliance clock for high-risk AI systems is ticking toward its own deadline next year, and organisations are already watching for early enforcement signals.
But fines are still the wrong frame. The more interesting shift is what organisations with mature governance are discovering about what it enables.
When governance becomes infrastructure
PwC’s 2025 Responsible AI Survey found that 58% of executives say responsible AI practices improve ROI and operational efficiency, and 55% say they enhance customer experience. These aren’t compliance metrics; they’re business outcomes. And they emerge from a specific pattern: when governance is embedded in how AI is built rather than applied after the fact, it removes friction rather than adding it.
A regulated financial services firm that embedded governance checkpoints across its AI lifecycle (automating model validation, documentation, and compliance review) reduced those processes by 75% while accelerating time-to-value for new AI products. The governance infrastructure that looked like overhead turned out to be the thing that allowed them to move faster than competitors still doing it manually.
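To make that concrete, here’s a minimal sketch of what an automated governance checkpoint can look like inside a deployment pipeline. Everything in it (the record fields, the quality threshold, the sign-off rule) is an illustrative assumption, not a description of any particular firm’s stack:

```python
from dataclasses import dataclass, field

@dataclass
class GovernanceRecord:
    """Evidence a model must carry before it can be promoted."""
    model_id: str
    validation_auc: float             # output of the automated validation suite
    documentation_complete: bool      # model card, data lineage, intended use
    approved_data_sources: list = field(default_factory=list)
    human_signoff: str | None = None  # required only for high-risk systems

def governance_gate(record: GovernanceRecord, high_risk: bool) -> list[str]:
    """Return blocking issues; an empty list means clear to deploy."""
    issues = []
    if record.validation_auc < 0.80:  # hypothetical quality bar
        issues.append("model fails minimum validation threshold")
    if not record.documentation_complete:
        issues.append("model card / lineage documentation missing")
    if not record.approved_data_sources:
        issues.append("no approved data sources registered")
    if high_risk and record.human_signoff is None:
        issues.append("high-risk system requires human sign-off")
    return issues

# Pipeline usage: the deployment fails fast, with an auditable reason,
# instead of waiting weeks in a manual compliance review queue.
issues = governance_gate(
    GovernanceRecord("credit-scoring-v4", 0.86, True, ["core_ledger"]),
    high_risk=True,
)
if issues:
    raise SystemExit(f"Deployment blocked: {issues}")
```

The check itself is trivial; the value is that it runs on every release, produces an audit trail for free, and turns “is this allowed?” from a meeting into a unit test.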
The lens worth applying: governance as quality infrastructure rather than compliance overhead. Teams that know what’s permitted, what data they can use, and what approvals they need move faster than teams paralysed by ambiguity. The guardrail is also the runway.
The ownership gap is still wide
Here’s where the honest part of the picture needs space: despite the narrative shift, the execution reality remains patchy.
Diligent Institute’s mid-year GC Risk Index found that 60% of legal, compliance, and audit leaders now cite technology as their top risk concern, ahead of economic factors. And yet only 29% of organisations have comprehensive AI governance plans in place. The gap between recognising the risk and actually governing for it is still significant.
The OneTrust AI-Ready Governance Report adds another layer: 71% of IT leaders believe the speed of AI adoption actively conflicts with their organisation’s ability to enforce governance. That tension is real, and it won’t resolve itself. The organisations navigating it well tend to share a structural trait: they’ve moved from governance-by-committee, where every AI system routes through a slow central review, to governance-by-default, where clear policies and automated controls are embedded in the development process itself.
PwC describes it as a shift from the second line governing the first, to the first line governing itself. Responsibility moves closer to where the decisions are made.
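In code terms, the first line governing itself can look as simple as a policy check that fires at the moment a developer touches a data source, rather than in a committee review weeks later. This is a hedged sketch; the registry, names, and policy shape are invented for illustration:

```python
from functools import wraps

# Hypothetical policy registry: which datasets each AI use case may touch.
# In a governance-by-default setup this would live in versioned
# policy-as-code files, not in a central review committee's inbox.
ALLOWED_SOURCES = {
    "churn_model": {"crm_events", "support_tickets"},
    "credit_model": {"core_ledger"},
}

def enforce_data_policy(use_case: str):
    """Block disallowed dataset access at the call site, during development."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(dataset: str, *args, **kwargs):
            if dataset not in ALLOWED_SOURCES.get(use_case, set()):
                raise PermissionError(
                    f"{use_case!r} is not approved to read {dataset!r}"
                )
            return fn(dataset, *args, **kwargs)
        return wrapper
    return decorator

@enforce_data_policy("churn_model")
def load_training_data(dataset: str):
    ...  # fetch from the feature store

load_training_data("crm_events")      # permitted: proceeds silently
# load_training_data("core_ledger")   # would raise PermissionError immediately
```

The developer gets an answer in milliseconds, and the policy owner gets enforcement without a meeting.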
Trust as a durable asset
The competitive framing that’s starting to land in boardrooms is this: in a market where every organisation is deploying AI, trust becomes a differentiator. Enterprise procurement teams are increasingly asking not just “what does this AI do?” but “how is it governed, how is it audited, and what happens when it goes wrong?”
This connects to a thread running through several earlier posts in this series. The architecture bottleneck argument (that AI ambitions have a ceiling set by infrastructure decisions made years ago) applies equally to governance. The organisations that invested early in data governance, model documentation, and accountability structures find those investments compounding now. The ones that deferred it are discovering the debt is harder to repay at speed.
The same pattern appears in the agentic AI discussion. As AI agents take on more autonomous action across enterprise workflows, the governance layer that defines what they can touch, what they can decide, and what requires human sign-off becomes not just a compliance requirement but a fundamental operating condition. Agents without governance boundaries aren’t a product. They’re a liability.
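One way to picture that governance layer, as a hedged sketch with invented tools, limits, and names: express the boundary as data the runtime checks before any agent action, with escalation to a human built in.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentBoundary:
    """Hypothetical governance envelope for an autonomous agent."""
    allowed_tools: frozenset      # what it can touch
    max_autonomous_spend: float   # what it can decide on its own
    escalation_channel: str       # where human sign-off is requested

REFUND_AGENT = AgentBoundary(
    allowed_tools=frozenset({"lookup_order", "issue_refund"}),
    max_autonomous_spend=100.0,
    escalation_channel="support-leads",
)

def authorise(boundary: AgentBoundary, tool: str, spend: float = 0.0) -> str:
    """Decide whether an agent action runs, escalates, or is denied."""
    if tool not in boundary.allowed_tools:
        return "deny"                                     # outside the agent's remit
    if spend > boundary.max_autonomous_spend:
        return f"escalate:{boundary.escalation_channel}"  # human sign-off required
    return "allow"

assert authorise(REFUND_AGENT, "issue_refund", 40.0) == "allow"
assert authorise(REFUND_AGENT, "issue_refund", 500.0).startswith("escalate")
assert authorise(REFUND_AGENT, "delete_customer") == "deny"
```

The specifics don’t matter; what matters is that the boundary is explicit, versioned, and testable before the agent ever acts.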
From obligation to operating model
The WEF framing resonates here: the remedy for governance getting caricatured as red tape isn’t more procedures. It’s a return to first principles: define the outcomes governance exists to protect, then work backwards to the minimum mechanisms needed to achieve them.
Integrity. Accountability. Transparency. Resilience. These aren’t abstract ethics principles. They’re descriptions of what makes an AI system one that customers, regulators, and employees can actually rely on.
The organisations treating governance as a strategic asset rather than a compliance cost are building something that’s genuinely hard to replicate quickly. Trust, once earned at scale, tends to compound. And trust lost at scale, as a number of high-profile AI incidents have demonstrated, is remarkably expensive to recover.
In your organisation, has the responsible AI conversation moved into product and engineering teams yet, or is it still largely owned by legal and compliance?
Let’s keep learning, together.
Share your thoughts