There’s a version of product development history where every new paradigm requires an entirely new methodology. New tech arrives, existing playbooks get thrown out, consultants sell fresh frameworks, and everyone starts over.
AI is triggering that instinct. The good news is it doesn’t have to.
The Loop Isn’t Broken
The Lean Product Process (identify what customers actually need, build the smallest thing that tests your assumption, measure what happens, learn, repeat) is still doing useful work.
It survived mobile. It survived SaaS. It will survive AI. Because at its core, it’s just a structured way of being wrong quickly and cheaply, which turns out to be a valuable skill in any technology era.
What changes isn’t the philosophy. It’s the specific questions you ask at each stage.
Where AI Complicates the Familiar Loop
Building a traditional software product is relatively deterministic. Same input, same output. Your test suite confirms it. Your users either click the button or don’t.
AI products are different in three specific ways that force the loop to stretch:
They improve through data, not just code. Your MVP isn’t just a thin slice of features. It’s a thin slice of features and enough data to make the model useful. Getting the data question right is often the real MVP challenge.
They behave non-deterministically. The same input can produce different outputs. Standard software testing logic ("did it return the right answer?") gives way to "does it return a good-enough answer across the range of inputs it will actually see?"
They fail in unfamiliar ways. A software bug crashes visibly. An AI failure often looks like a confident wrong answer. Edge cases, distribution shifts, and model degradation are failure modes most teams haven’t had to manage before.
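One way to make "good-enough across the range of inputs" concrete is a pass-rate harness: sample each input several times and score outputs against a set of acceptable answers, instead of asserting exact-match on a single run. A minimal sketch; `fake_model`, the eval set, and the 0.9 bar are all hypothetical stand-ins, not any particular team's setup:

```python
def pass_rate(model, eval_set, trials=5):
    """Score a stochastic model by pass rate across inputs and
    repeated trials, not exact-match on a single run."""
    passes = total = 0
    for prompt, acceptable in eval_set:
        for _ in range(trials):
            total += 1
            if model(prompt) in acceptable:
                passes += 1
    return passes / total

# Hypothetical deterministic stand-in so the sketch runs on its own;
# a real call would hit an LLM endpoint or a sampled classifier.
fake_model = lambda p: "positive" if "great" in p else "negative"
evals = [("great product", {"positive"}),
         ("slow support", {"negative"})]
print(pass_rate(fake_model, evals) >= 0.9)  # an illustrative release bar
```

The useful shift is in the return type: a pass rate you can set a threshold against, rather than a binary pass/fail per test case.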
None of this breaks the loop. But it adds steps.
The Updated Questions Worth Asking
The Lean loop, when adapted for AI, needs a few new checkpoints:
At the Discovery stage:
Where will the training data come from, and does it reflect the actual conditions the model will face? (Many AI products fail here, before a single line of model code is written.)
At the Build stage:
What does “good enough” look like for model quality, and who gets to decide? The answer is rarely just “high accuracy.”
At the Test stage:
What are the edge cases and failure modes that would genuinely matter to users? What happens when the model is confidently wrong?
At the Learn stage:
How do you know if a model is degrading in production, and what triggers a retraining cycle? Drift is invisible until it isn’t.
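As one illustration of a retraining trigger, a crude drift check can flag when live predictions wander too far from what the model produced at launch. This is a sketch under deliberately simple assumptions (the 3.0 z-threshold is arbitrary, and real monitoring would track per-feature PSI or KS statistics rather than a single mean):

```python
from statistics import mean, stdev

def drifted(train_scores, live_scores, z_threshold=3.0):
    """Flag drift when the live mean sits more than z_threshold
    standard errors from the training-time mean. Crude on purpose:
    production monitoring would use per-feature PSI or KS tests."""
    mu, sigma = mean(train_scores), stdev(train_scores)
    se = sigma / (len(live_scores) ** 0.5)
    return abs(mean(live_scores) - mu) > z_threshold * se

train = [0.4, 0.5, 0.6, 0.5, 0.4, 0.6]  # model confidence at launch
live = [0.9, 0.95, 1.0, 0.92]           # a suspiciously overconfident week
print(drifted(train, live))
```

Even a check this blunt turns "drift is invisible until it isn't" into an alert you can wire a retraining cycle to.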
And cutting across all of these: how do you set expectations with users so that “AI sometimes gets it wrong” feels managed rather than broken? That’s a product question as much as a technical one.
What Teams Doing This Well Share
The teams building AI products effectively haven’t invented a new methodology. What they share is the discipline of the lean loop, kept intact, alongside honesty about what’s different.
They don’t skip the data question to get to the model faster. They treat model evaluation as a continuous practice, not a launch gate. And they build user feedback mechanisms that can distinguish “this feature is confusing” from “this model behaviour is wrong.”
The core principles (rapid iteration, validated learning, staying close to the customer) remain as useful as ever. The execution layer just asks more of you.
An Honest Reflection
There is something almost comforting about the fact that a methodology born in physical manufacturing, adapted for software, is now stretching to accommodate neural networks.
Good thinking tends to be portable. The loop endures. The questions evolve.
When you think about the AI products being built around you, which step in the lean loop do you think teams most commonly skip, and what does that cost them in production?
Let’s keep learning together.