The worst way to start a churn interview is to ask why the user cancelled. It sounds obvious — you want to know why they left, so you ask. But that question immediately puts users in a defensive posture. They've ended a commercial relationship. They don't want conflict. So they give you the answer that closes the loop most cleanly: "price," "didn't need it," "went with something else." You get data that's technically accurate and completely useless.
The questions that surface real churn reasons are almost never the direct ones. They're narrative, counterfactual, and indirect — designed to get users talking about their experience rather than defending their decision. After running exit research programs for a range of SaaS products, I've settled on 10 questions that consistently produce answers worth acting on. Here's how to use them.
Two structural problems undermine most churn interviews before they start.
Problem 1: Asking for a verdict instead of a story. "Why did you cancel?" asks the user to summarize and judge. Summaries are simplified. Judgments are self-justifying. Neither tells you what actually happened. The question you want is one that gets them narrating their experience chronologically — because that's where the real reason lives.
Problem 2: Asking too early. Reaching out through the cancellation flow, while the user is mid-process, gets you the worst version of their answer. Wait two to three days post-cancellation. The frustration has cooled. The relief of having made a decision has settled in. They're more likely to be reflective and honest.
What follows are 10 questions organized into three phases: opening, excavation, and counterfactual. Use them in sequence. Each phase builds on what the previous one surfaces.
The goal of the opening is to get the user into story mode, not evaluation mode. You want them talking about context, not conclusions.
Question 1: "Walk me through what you were trying to accomplish when you first signed up."
This is your most important question and it goes first. It anchors the conversation in the user's original intent, not in their exit. It also reveals immediately whether they had a clear use case or were signing up speculatively — which changes everything about how you interpret what comes next.
Question 2: "What did your workflow look like before you started using the product?"
Understanding what they were doing before gives you a baseline. If their previous solution was a spreadsheet, the bar for your product was low and they still left — that's significant. If they switched from a mature competitor, you learn what they expected the product to match.
Question 3: "What happened in the first few weeks after you signed up?"
Let them narrate the early experience without prompting. Where they naturally pause, nudge the story forward: "So after that, what did you do?" The onboarding experience almost always surfaces here, and the user will flag what felt wrong without you having to ask directly.
Once the user is in the story, these questions dig into the specific friction points and decision moments. Don't rush to this phase — let the opening run until the story reaches the point where things started to go wrong.
Question 4: "Can you describe a specific moment when things felt like they weren't working?"
"Specific moment" is doing the work here. It forces the user out of generalities and into a concrete memory. That memory is usually the actual churn origin point — the moment where confidence in the product started to erode.
Question 5: "What did you try when you hit that problem?"
This reveals whether they attempted to resolve it — and if so, what happened. Did they contact support? Did support help? Did they find a workaround? Did they just stop using the feature? Each answer maps to a different failure mode: product, support, or discoverability.
Question 6: "Was there a point where you thought about cancelling before you actually did?"
Almost always, yes. And the gap between when they first considered it and when they actually cancelled is your intervention window. Understanding what kept them around during that gap — and what finally tipped the decision — tells you where your early warning signals are.
I ran exit interviews for a B2B research tool where this question revealed that 7 of 10 churned users had mentally decided to leave 3 to 6 weeks before cancelling. They stayed because of a specific project they needed to finish. The product team had assumed churn was sudden. It wasn't — there was a multi-week window where a single well-timed outreach could have changed the outcome.
Counterfactual questions are the most valuable and the most underused. They ask users to articulate what would have had to be different — which forces them to name the actual gap rather than summarize their dissatisfaction.
Question 7: "What would have had to be true for this to have worked for you?"
This is the single most actionable question in the set. The answer is almost always a product brief. "If the integration with our CRM had worked without engineering help" or "if there had been a way to show my manager the output without giving them an account" — these are specific, buildable things.
Question 8: "What were you hoping would happen that didn't?"
Where Question 7 surfaces product gaps, this one surfaces expectation gaps. The answer often reveals a mismatch between what the acquisition messaging implied and what the product actually delivered — which is critical information for the team owning the landing page and onboarding flow.
Question 9: "What would have to change for you to consider coming back?"
Not a win-back pitch — a diagnostic. Users who say "nothing, I've moved on" are telling you this was a clean break. Users who name a specific thing are telling you they're still a fit for the product and potentially recoverable. Both answers are useful.
Question 10: "What would you tell a colleague who asked if they should try it?"
Save this for last. It's the least defensive question in the set — they're imagining advising someone else, not justifying their own decision. The answer is usually more honest and more nuanced than anything they've said so far. It often surfaces what they genuinely valued, alongside what they genuinely found lacking.
For teams running this at scale, Usercall runs AI-moderated churn interviews using these question frameworks — with researcher controls that let you customize the question sequence, probe depth, and segmentation logic. You get structured qualitative data across dozens of churned users without scheduling individual calls.
Once you have the answers, the next step is synthesis — categorizing what you heard into structural reasons versus situational ones, and connecting those patterns to the broader churn investigation your team is running.
Related: Why Customers Leave: 12 Real Reasons · How to Investigate Customer Churn Step by Step · When to Ask Users for Feedback
Need a way to run these interviews without the scheduling overhead? Usercall automates AI-moderated exit conversations at the depth of a real researcher — triggered automatically when a cancellation event fires, so you never miss a churned user who would have talked.