How Old Innovation Frameworks Are Quietly Learning a New AI Trick

Innovation has a habit of making frameworks look outdated and then, if you wait long enough, making them look wise again.

Despite all the noise around AI, one quietly persistent fact holds: the familiar tools many teams grew up with — Three Horizons, design thinking, Lean Startup — are still doing useful work. The interesting change is not that these frameworks have been replaced, but that they’re being rewired for a world where AI shows up almost everywhere.


The Classics Haven’t Left the Building

Take the Three Horizons model.

At its simplest, it’s still a helpful way to separate today’s core business (H1), adjacent bets (H2), and more disruptive explorations (H3). It gives leadership teams a shared language for answering the awkward questions: “Are we only optimising the core?” “Are we over‑indexing on distant bets?” That structure hasn’t become less relevant just because AI entered the room.

Design thinking, too, continues to shape how teams frame problems, empathise with users, prototype, and iterate. And Lean Startup principles still give product teams a discipline for testing assumptions quickly instead of falling in love with the slide deck. The pattern worth noting is that these frameworks haven’t been thrown out — they’re being pointed at new kinds of problems.


What Changes When AI Enters the Framework

The shift is less “new framework” and more “new content running through the old pipes.”

In a Three Horizons conversation, AI now appears in every bucket. In H1, it’s about automating back‑office workflows, augmenting frontline teams, and squeezing efficiency out of existing processes. In H2, it’s about new AI‑enabled products or services that sit adjacent to the core. In H3, it becomes part of the question: what would we build if we assumed AI capabilities as a given, not a bolt‑on?

Design thinking sessions start to look different when generative AI joins the room. Teams use it to generate more divergent concepts, to prototype content or interfaces faster, to explore edge cases they might not have thought of on their own. The framework remains human‑centred; AI just becomes another collaborator in exploring the space.

Lean‑style experimentation also changes flavour. Build–measure–learn loops increasingly include questions like “what happens when we add an AI agent into this step?” or “how does behaviour change when we personalise using a model rather than simple rules?” The discipline of testing assumptions stays the same. The assumptions being tested evolve.
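To make that concrete, here is a minimal sketch of what a "model versus rules" build–measure–learn pass might look like in code. Everything in it is hypothetical and illustrative — the offer names, the toy scoring function standing in for a real model, and the helper names are all invented for this example, not taken from any real system.

```python
import hashlib
from collections import Counter

def assign_variant(user_id: str) -> str:
    """Deterministic 50/50 split into 'rules' (control) and 'model' (treatment)."""
    digest = hashlib.sha256(user_id.encode()).digest()
    return "model" if digest[0] % 2 == 0 else "rules"

def rules_offer(user: dict) -> str:
    # Control arm: the hand-written rule the team already trusts.
    return "bundle_offer" if user["visits"] > 3 else "welcome_offer"

def model_offer(user: dict) -> str:
    # Treatment arm: a toy stand-in for a trained model or LLM call.
    score = 0.1 * user["visits"] + 0.5 * user["clicked_last_offer"]
    return "bundle_offer" if score > 0.6 else "welcome_offer"

def run_experiment(users: list) -> Counter:
    """One build-measure-learn pass: assign each user to an arm,
    act, and tally which offer each arm produced."""
    tally = Counter()
    for user in users:
        arm = assign_variant(user["id"])
        offer = rules_offer(user) if arm == "rules" else model_offer(user)
        tally[(arm, offer)] += 1
    return tally
```

The point of the sketch is the shape of the loop, not the logic inside it: the assignment, measurement, and comparison discipline is unchanged from classic Lean experimentation — only the treatment arm now contains a model instead of another rule.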


How Leading Teams Are Rewiring Their Playbooks

In organisations leaning into this, AI is not treated as a separate “track” running alongside innovation — it is woven into the frameworks themselves.

Three patterns keep showing up:

  • AI is used to accelerate ideation — generating options, variations, and provocations that human teams can refine.
  • AI is explored as an execution layer — agentic systems taking on repeatable tasks, orchestrating workflows, or acting as copilots in complex journeys.
  • AI is treated as a strategic variable — something that changes cost curves, customer expectations, and competitive boundaries, rather than just a feature on a roadmap.

The conversation worth having in leadership rooms is less “should we adopt a new AI framework?” and more “how do our existing frameworks need to stretch to make AI a first‑class citizen?”


Why the Discipline Still Matters

It’s tempting in an AI cycle to throw away anything that feels “pre‑AI” and start again. The risk is that you also throw away the discipline that kept earlier innovation efforts coherent.

The enduring value of these frameworks wasn’t in the jargon. It was in what they forced teams to do: think across time horizons, stay close to users, test assumptions quickly, separate wishful thinking from validated learning. Those muscles are even more important when a new technology makes almost anything sound plausible for a while.

The lens worth applying is simple: AI doesn’t remove the need for structure; it increases it. The organisations that seem to be navigating this moment well aren’t the ones with the newest canvas. They’re the ones that quietly updated the legend on the frameworks they already had.


A Question for Anyone Building With AI

There’s a small but telling moment in many enterprise conversations now: someone asks, “where does this AI initiative actually sit in our horizons, or in our innovation portfolio?” The teams that can answer that crisply tend to have an easier time getting buy‑in.

The frameworks haven’t gone away. They’ve just been asked to speak a new language.

When you look at your own AI efforts, which box do they actually live in today — core optimisation, adjacent expansion, or truly new horizon bets?

Let’s keep learning — together.
