
I’ve seen teams walk out of interviews feeling confident, aligned, and excited—only to realize a week later that none of the insights actually hold up. The quotes sound great. The stories are compelling. But when you try to make a decision, everything falls apart.
This is the trap of semi-structured interviews: they produce convincing narratives, not necessarily reliable evidence.
And the worst part? Most researchers don’t realize they’re doing it wrong because the conversations feel so productive in the moment.
If your interviews ever lead to debates like “we heard different things” or “it depends on the user,” you’re not alone—you’re just running semi-structured interviews without enough structure to trust the output.
The industry advice sounds harmless: “Keep it conversational.” But taken literally, this is where things break.
Here’s what actually happens:
I once audited a set of 18 interviews for a growth team trying to understand trial drop-off. Every interview followed the same guide—on paper. But in practice, each interviewer improvised heavily. Some dug into onboarding friction, others into pricing perception.
When we tried to synthesize, we couldn’t answer a basic question: what actually causes drop-off?
Not because the data wasn’t there—but because it wasn’t collected consistently.
This is where most semi-structured interviews fail: not in asking questions, but in maintaining comparability while exploring depth.
If you remember one thing, it’s this: semi-structured interviews are not “loosely structured.” They are tightly controlled in two dimensions at once: consistency across participants and depth within each conversation.
Most teams manage one and neglect the other. That’s why insights feel either shallow or inconsistent.
The goal is not balance—it’s enforcing both simultaneously.
After years of running interviews across onboarding, churn, pricing, and product discovery, I’ve settled on a system that removes guesswork without killing flexibility.
Most researchers start with a discussion guide. That’s backwards.
Start with 3–5 truths you must walk away with, no matter what.
Example for onboarding:
- What users actually do first after signing up
- Where they stall, and for how long
- What outcome they expected onboarding to deliver
This forces clarity. If a question doesn’t map to a truth, it doesn’t belong.
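Here’s a minimal sketch of that check in Python, treating the guide as data. The truth names and questions are illustrative, not from a real study:

```python
# A minimal sketch of the truths-first check. The rule being enforced is
# the one above: if a question doesn't map to a required truth, cut it.

TRUTHS = {
    "first_action",      # what users actually do first after signing up
    "stall_point",       # where they hesitate or stop
    "expected_outcome",  # what they expected onboarding to deliver
}

GUIDE = [
    ("Walk me through the last time you signed up. What did you do first?", "first_action"),
    ("Where did you pause or feel unsure?", "stall_point"),
    ("What were you hoping to have working by the end?", "expected_outcome"),
    ("What did you think of the design?", None),  # no mapping: cut it
]

def orphan_questions(guide, truths):
    """Return questions that don't map to any required truth."""
    return [q for q, truth in guide if truth not in truths]

for q in orphan_questions(GUIDE, TRUTHS):
    print(f"Cut or rewrite: {q!r}")
```

The point isn’t the code; it’s that the mapping is explicit enough to audit before anyone gets on a call.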
Opinions are easy—and often misleading. Behavior is harder—but reliable.
Bad question: “What did you think of the onboarding?”
Better: “Walk me through the last time you signed up—what did you do first?”
I ran a study where users claimed onboarding was “intuitive.” But when we walked through their actual behavior, 70% hesitated at the same step for over 20 seconds. The perception and reality didn’t match.
If you don’t anchor in behavior, you’ll optimize for what users say—not what they do.
This is the most overlooked skill in semi-structured interviews.
Instead of relying on instinct, define probe types:
- Sequence probes: “What did you do right after that?”
- Evidence probes: “Can you walk me through exactly what you saw?”
- Contrast probes: “How did that compare to what you expected?”
Then apply them consistently across participants.
This is where rigor comes from—not the script, but the consistency of depth.
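One way to enforce that consistency is to treat the playbook as data instead of instinct. A minimal Python sketch, with illustrative trigger signals and wording:

```python
# A minimal probe playbook, using the three probe types above. The point
# is that every interviewer draws follow-ups from the same fixed set.

PROBES = {
    "sequence": "What did you do right after that?",
    "evidence": "Can you walk me through exactly what you saw?",
    "contrast": "How did that compare to what you expected?",
}

def next_probe(signal: str) -> str:
    """Map an in-interview signal to the same probe type every time."""
    if signal == "skipped_step":
        return PROBES["sequence"]
    if signal == "vague_claim":   # e.g. "it was intuitive"
        return PROBES["evidence"]
    if signal == "surprise":
        return PROBES["contrast"]
    return "Tell me more about that."  # default depth probe

print(next_probe("vague_claim"))
```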
Don’t wait until synthesis to notice patterns.
After every 2–3 interviews, log emerging signals:
- Friction points that keep recurring
- Gaps between what users say and what they did
- Workarounds you didn’t expect
Then deliberately test these in subsequent interviews.
I’ve seen teams miss obvious patterns simply because they treated every interview as isolated instead of iterative.
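A lightweight fix is a running tally reviewed after every batch. A minimal sketch, assuming you tag each interview with the signals it surfaced (tag names are illustrative):

```python
from collections import Counter

_seen: set[str] = set()
signal_tally: Counter[str] = Counter()

def log_interview(interview_id: str, signals: list[str]) -> None:
    """Count each signal once per interview; ignore double-logging."""
    if interview_id in _seen:
        return
    _seen.add(interview_id)
    signal_tally.update(set(signals))

log_interview("p01", ["stalls_at_import", "price_unclear"])
log_interview("p02", ["stalls_at_import"])
log_interview("p03", ["stalls_at_import", "workaround_spreadsheet"])

# Batch review: anything seen in 2+ interviews becomes a hypothesis
# to test deliberately in the next sessions.
print([s for s, n in signal_tally.items() if n >= 2])  # ['stalls_at_import']
```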
Most teams still analyze interviews like this:
- Read every transcript
- Highlight the quotes that stand out
- Group the highlights into themes
- Present the themes as findings
This feels structured—but it’s fundamentally flawed.
It overweights:
- Vivid quotes over representative ones
- Articulate participants over typical ones
- Whatever you heard most recently
A better approach is what I call pattern-first synthesis:
- Define candidate patterns from your interim signals
- Count how many participants actually exhibit each one
- Only then pick quotes, as evidence for the patterns that survive the count
In a pricing study I led, our initial takeaway was “users want simpler pricing.” After pattern quantification, only 4 out of 20 users struggled with complexity. The real issue? 13 users felt they couldn’t predict costs.
That insight led to usage transparency features—not simplification. Completely different roadmap.
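The counting step itself is deliberately simple. Here’s a minimal sketch, assuming patterns have already been coded per participant; the tags echo the pricing example, but the data is illustrative:

```python
from collections import Counter

# participant -> behavior patterns coded from their transcript
coded = {
    "p01": ["cost_unpredictable"],
    "p02": ["too_complex"],
    "p03": ["cost_unpredictable", "too_complex"],
    "p04": ["cost_unpredictable"],
    # ...one entry per participant
}

# Count each pattern once per participant, then rank by prevalence.
counts = Counter(tag for tags in coded.values() for tag in set(tags))

n = len(coded)
for pattern, count in counts.most_common():
    print(f"{pattern}: {count}/{n} participants")
# Quotes come last, chosen to illustrate the patterns that dominate the count.
```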
The hardest part of semi-structured interviews isn’t asking questions—it’s maintaining consistency at scale.
This is where newer tools fundamentally outperform traditional setups:
- The same probe logic is applied to every participant, so depth no longer depends on who is moderating
- Transcription and tagging happen automatically, so interim signals surface after every session
- Sessions can run in parallel without interviewer drift
The shift here is subtle but important: you’re not just scaling interviews—you’re standardizing depth and reducing bias.
This method is powerful, but it’s constantly misapplied.
Don’t use semi-structured interviews when:
- You’re validating a specific change; an experiment will answer faster and cleaner
- You need to know how common something is; that’s a survey or product analytics
- The behavior is already observable in your data; go look at the data first
I’ve seen teams run interviews to validate pricing changes when a simple experiment would have given a clearer answer in half the time.
Use interviews to discover and explain—not to confirm.
Great semi-structured interviews don’t feel dramatically different in the moment. The difference shows up later—in the clarity of decisions they enable.
Every interview forces tradeoffs:
- Depth vs. comparability
- Flexibility vs. consistency
- Rapport vs. rigor
Most researchers optimize for the wrong side of these tradeoffs without realizing it.
Once you start treating semi-structured interviews as a system—not a conversation—you stop collecting stories and start generating evidence.
And that’s when they actually become useful.