Semi-Structured Interviews: Why Most Researchers Get Them Wrong (and a Better Way to Do Them)

Most semi-structured interviews feel insightful—and that’s exactly the problem

I’ve seen teams walk out of interviews feeling confident, aligned, and excited—only to realize a week later that none of the insights actually hold up. The quotes sound great. The stories are compelling. But when you try to make a decision, everything falls apart.

This is the trap of semi-structured interviews: they produce convincing narratives, not necessarily reliable evidence.

And the worst part? Most researchers don’t realize they’re doing it wrong because the conversations feel so productive in the moment.

If your interviews ever lead to debates like “we heard different things” or “it depends on the user,” you’re not alone—you’re just running semi-structured interviews without enough structure to trust the output.

The real mistake: treating semi-structured interviews like casual conversations

The industry advice sounds harmless: “Keep it conversational.” But taken literally, this is where things break.

Here’s what actually happens:

  • You skip questions because the conversation is flowing
  • You probe deeply on interesting participants—but not consistently across all
  • You remember the most articulate users, not the most representative patterns

I once audited a set of 18 interviews for a growth team trying to understand trial drop-off. Every interview followed the same guide—on paper. But in practice, each interviewer improvised heavily. Some dug into onboarding friction, others into pricing perception.

When we tried to synthesize, we couldn’t answer a basic question: what actually causes drop-off?

Not because the data wasn’t there—but because it wasn’t collected consistently.

This is where most semi-structured interviews fail: not in asking questions, but in maintaining comparability while exploring depth.

Semi-structured interviews are not flexible—they’re dual-controlled

If you remember one thing, it’s this: semi-structured interviews are not “loosely structured.” They are tightly controlled in two dimensions at once.

  • Horizontal control (consistency): every participant answers core questions
  • Vertical control (depth): every key signal is probed to the same level

Most teams manage one and neglect the other. That’s why insights feel either shallow or inconsistent.

The goal is not balance—it’s enforcing both simultaneously.

A field-tested framework: how to actually run semi-structured interviews

After years of running interviews across onboarding, churn, pricing, and product discovery, I’ve settled on a system that removes guesswork without killing flexibility.

1. Define “must-learn truths” before writing questions

Most researchers start with a discussion guide. That’s backwards.

Start with 3–5 truths you must walk away with, no matter what.

Example for onboarding:

  • Where do users hesitate or second-guess themselves?
  • What expectation breaks during the first session?
  • What action signals commitment vs abandonment?

This forces clarity. If a question doesn’t map to a truth, it doesn’t belong.
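That mapping can be enforced mechanically. A minimal sketch in Python (the truths and questions below are invented placeholders, not a real discussion guide):

```python
# Sketch: check that every discussion-guide question maps to a must-learn truth.
# Truth keys and questions are illustrative assumptions.

TRUTHS = {
    "hesitation": "Where do users hesitate or second-guess themselves?",
    "broken_expectation": "What expectation breaks during the first session?",
    "commitment_signal": "What action signals commitment vs abandonment?",
}

# Each guide entry pairs a question with the truth it serves (None = no truth).
GUIDE = [
    ("Walk me through the last time you signed up.", "hesitation"),
    ("What did you expect to see after your first login?", "broken_expectation"),
    ("How do you feel about our brand?", None),
]

def unmapped_questions(guide):
    """Return questions that serve no must-learn truth; these get cut."""
    return [question for question, truth in guide if truth not in TRUTHS]

print(unmapped_questions(GUIDE))
```

Running the check before fieldwork makes the "if it doesn't map, it doesn't belong" rule a gate rather than a guideline.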

2. Design anchor questions that force behavioral recall

Opinions are easy—and often misleading. Behavior is harder—but reliable.

Bad question: “What did you think of the onboarding?”
Better: “Walk me through the last time you signed up—what did you do first?”

I ran a study where users claimed onboarding was “intuitive.” But when we walked through their actual behavior, 70% hesitated at the same step for over 20 seconds. The perception and reality didn’t match.

If you don’t anchor in behavior, you’ll optimize for what users say—not what they do.

3. Standardize probing, not just questions

This is the most overlooked skill in semi-structured interviews.

Instead of relying on instinct, define probe types:

  • Sequence probe: “What happened right before that?”
  • Decision probe: “Why did you choose that option?”
  • Expectation probe: “What did you expect to happen?”
  • Failure probe: “What felt off or confusing?”

Then apply them consistently across participants.

This is where rigor comes from—not the script, but the consistency of depth.
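Consistency of depth is auditable if you log which probe types were applied per session. A small sketch, with invented session data, of how that audit might look:

```python
# Sketch: flag participants whose sessions missed one of the four probe types.
# Session records are illustrative assumptions.

PROBE_TYPES = {"sequence", "decision", "expectation", "failure"}

sessions = {
    "P1": {"sequence", "decision", "expectation", "failure"},
    "P2": {"sequence", "decision"},  # expectation and failure probes skipped
}

def coverage_gaps(sessions):
    """Map each participant to the probe types never applied in their session."""
    return {
        participant: sorted(PROBE_TYPES - used)
        for participant, used in sessions.items()
        if PROBE_TYPES - used
    }

print(coverage_gaps(sessions))
```

Reviewing the gaps after every session, rather than at synthesis, is what keeps depth comparable across interviewers.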

4. Track emerging patterns in real time

Don’t wait until synthesis to notice patterns.

After every 2–3 interviews, log emerging signals:

  • Repeated friction points
  • Unexpected behaviors
  • Language users consistently use

Then deliberately test these in subsequent interviews.

I’ve seen teams miss obvious patterns simply because they treated every interview as isolated instead of iterative.

Why traditional analysis methods quietly distort your findings

Most teams still analyze interviews like this:

  1. Write interview summaries
  2. Highlight key quotes
  3. Discuss themes

This feels structured—but it’s fundamentally flawed.

It overweights:

  • Articulate participants
  • Memorable anecdotes
  • Researcher bias during note-taking

A better approach is what I call pattern-first synthesis:

  1. Break responses into atomic observations
  2. Group by similarity before labeling
  3. Count how often each pattern appears
  4. Only then interpret meaning
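Steps 1 through 3 can be sketched in a few lines. The observations and pattern labels below are invented for illustration, and in practice the grouping is done by a researcher (or a similarity model) rather than exact string lookup:

```python
from collections import Counter

# Sketch of pattern-first synthesis: atomic observations are grouped under
# pattern labels first, then counted, before anyone interprets meaning.
# Observations and labels are illustrative assumptions.

observations = [
    ("P1", "couldn't predict monthly cost"),
    ("P2", "pricing tiers confusing"),
    ("P3", "couldn't predict monthly cost"),
    ("P4", "couldn't predict monthly cost"),
]

# Grouping step: a researcher assigns labels after clustering similar
# observations; here a toy lookup table stands in for that judgment.
LABELS = {
    "couldn't predict monthly cost": "cost_unpredictability",
    "pricing tiers confusing": "complexity",
}

counts = Counter(LABELS[obs] for _, obs in observations)
print(counts.most_common())
```

The point of counting before interpreting is exactly the pricing example above: a loud anecdote ("simpler pricing") can lose to a quieter but far more frequent pattern.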

In a pricing study I led, our initial takeaway was “users want simpler pricing.” After pattern quantification, only 4 out of 20 users struggled with complexity. The real issue? 13 users felt they couldn’t predict costs.

That insight led to usage transparency features—not simplification. Completely different roadmap.

Where AI-native tools change the game for semi-structured interviews

The hardest part of semi-structured interviews isn’t asking questions—it’s maintaining consistency at scale.

This is where newer tools fundamentally outperform traditional setups:

  • UserCall: Built specifically for research-grade semi-structured interviews. It uses AI-moderated interviews with deep researcher controls to ensure every participant gets consistent probing while still adapting dynamically. More importantly, it lets you trigger interviews at key product moments—so you capture insight exactly when behavior happens, not days later when memory is unreliable.
  • Video call tools: Flexible but highly variable—quality depends entirely on interviewer skill
  • Survey tools: Scalable but rigid—no ability to probe or adapt

The shift here is subtle but important: you’re not just scaling interviews—you’re standardizing depth and reducing bias.

When you should NOT use semi-structured interviews

This method is powerful—but misapplied constantly.

Don’t use semi-structured interviews when:

  • You need statistically significant validation
  • The behavior is too far in the past to recall accurately
  • You already know the problem and need measurement, not exploration

I’ve seen teams run interviews to validate pricing changes when a simple experiment would have given a clearer answer in half the time.

Use interviews to discover and explain—not to confirm.

The real skill: managing signal vs story in every conversation

Great semi-structured interviews don’t feel dramatically different in the moment. The difference shows up later—in the clarity of decisions they enable.

Every interview forces tradeoffs:

  • Do you follow an interesting tangent—or maintain consistency?
  • Do you trust what users say—or dig into what they did?
  • Do you capture a great quote—or a reliable pattern?

Most researchers optimize for the wrong side of these tradeoffs without realizing it.

Once you start treating semi-structured interviews as a system—not a conversation—you stop collecting stories and start generating evidence.

And that’s when they actually become useful.

Junu Yang
Junu is a founder and qualitative research practitioner with 15+ years of experience in design, user research, and product strategy. He has led and supported large-scale qualitative studies across brand strategy, concept testing, and digital product development, helping teams uncover behavioral patterns, decision drivers, and unmet user needs. Before founding UserCall, Junu worked at global design firms including IDEO, Frog, and RGA, contributing to research and product design initiatives for companies whose products are used daily by millions of people. Drawing on years of hands-on interview moderation and thematic analysis, he built UserCall to solve a recurring challenge in qualitative research: how to scale depth without sacrificing rigor. The platform combines AI-moderated voice interviews with structured, researcher-controlled thematic analysis workflows. His work focuses on bridging traditional qualitative methodology with modern AI systems—ensuring speed and scale do not compromise nuance or research integrity. LinkedIn: https://www.linkedin.com/in/junetic/
Published
2026-04-01
