5 AI Feedback Templates You Want to Steal Right Now

by Lihong
Oct 21, 2025

Most teams waste hours writing feedback questions from scratch. You don't need to. These five templates are ready to copy, tweak, and launch in TheySaid today.

Each template includes what it's for, when to trigger it, the exact questions to ask (with question types and answer options), and how many AI follow-up questions to use. Pick the one that fits your current problem, adjust the details to match your product, and you're live.

1. New User Onboarding Interview

Goal: Surface blockers to first value and speed up setup.

When to trigger: Day 3–10 after signup, or when a user hasn't hit your key activation event.
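
If you fire interviews from your own backend rather than a built-in trigger, this condition is a small predicate over signup date and activation state. A minimal TypeScript sketch, assuming a hypothetical User shape and treating day 3 as the earliest send on either branch:

```typescript
// Hypothetical user shape; adapt to however your system models accounts.
interface User {
  signedUpAt: Date;
  hasHitActivationEvent: boolean; // e.g. "created first project"
}

const DAY_MS = 24 * 60 * 60 * 1000;

// Fire between day 3 and day 10 after signup, or past day 10 if the user
// still hasn't activated. The day-3 floor on both branches is an assumption.
function shouldTriggerOnboardingInterview(user: User, now: Date = new Date()): boolean {
  const days = (now.getTime() - user.signedUpAt.getTime()) / DAY_MS;
  return days >= 3 && (days <= 10 || !user.hasHitActivationEvent);
}
```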

Questions:

  • Q1: What goal made you sign up? (Open-ended) — AI follow-ups: 2
  • Q2: How clear is the setup so far? (Rating scale 1–5: 1=Very unclear, 5=Crystal clear) — AI follow-ups: 1–2 if rated ≤3
  • Q3: Which step (if any) slowed you down? (Multiple choice: Importing data; Connecting integrations; Inviting teammates; Permissions; Other) — AI follow-ups: 2
  • Q4: Did you reach your "first success moment"? (Multiple choice: Yes; Partially; Not yet) — AI follow-ups: 2 if answered Partially or Not yet
  • Q5: Which integration matters most right now? (Multiple choice: Slack; Salesforce; HubSpot; Google Workspace; Zapier; Other) — AI follow-ups: 1
  • Q6: How confident are you in using core features? (Matrix rating 1–5: Create project; Share link; Analyze results; Trigger automations) — AI follow-ups: 1 per feature rated ≤3, max 3 total
  • Q7: What would have made this week easier? (Open-ended) — AI follow-ups: 2
  • Q8: How likely are you to continue? (Rating scale 0–10, NPS format) — AI follow-ups: 1–2 if rated ≤6

What to do with the results: If someone names a blocker, create a task and route it to the owner. Send targeted help content based on their specific friction point.
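
As a concrete example, here is what that routing might look like as a webhook handler. Everything in this sketch is an assumption, not a TheySaid API: the payload shape, the owner table, and the help URLs are stand-ins for your own stack.

```typescript
// Hypothetical handler for a completed onboarding interview.
type Blocker = "Importing data" | "Connecting integrations" | "Inviting teammates" | "Permissions" | "Other";

interface OnboardingResult {
  userEmail: string;
  blocker?: Blocker;       // Q3 answer
  blockerDetail?: string;  // text from the AI follow-ups
}

const OWNER: Record<Exclude<Blocker, "Other">, string> = {
  "Importing data": "data-team",
  "Connecting integrations": "integrations-team",
  "Inviting teammates": "growth-team",
  "Permissions": "platform-team",
};

const HELP_DOC: Record<Exclude<Blocker, "Other">, string> = {
  "Importing data": "https://example.com/help/import",
  "Connecting integrations": "https://example.com/help/integrations",
  "Inviting teammates": "https://example.com/help/invites",
  "Permissions": "https://example.com/help/permissions",
};

function routeOnboardingResult(r: OnboardingResult): void {
  if (!r.blocker || r.blocker === "Other") return; // "Other" goes to manual triage
  createTask(OWNER[r.blocker], `Onboarding blocker: ${r.blocker}`, r.blockerDetail ?? "");
  sendHelpEmail(r.userEmail, HELP_DOC[r.blocker]);
}

// Stubs for whatever task tracker and email tool you actually run.
function createTask(owner: string, title: string, body: string): void { /* ... */ }
function sendHelpEmail(to: string, url: string): void { /* ... */ }
```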

Why this works: This template catches users while the experience is still fresh but gives them enough time to hit meaningful obstacles. The mix of ratings and open-ended questions lets you quantify problem areas while getting the context you need to actually fix them.

2. Feature Adoption Interview

Goal: Understand why a feature sticks or doesn't, and uncover upsell signals.

When to trigger: After a user has tried Feature X at least 3 times, or abandoned it after first use.

Questions:

  • Q1: What job were you trying to accomplish with [Feature X]? (Open-ended) — AI follow-ups: 2
  • Q2: How frequently do you need this job done? (Multiple choice: Daily; Weekly; Monthly; Quarterly; Ad hoc) — AI follow-ups: 1
  • Q3: How easy was [Feature X] on first use? (Rating scale 1–5) — AI follow-ups: 1–2 if rated ≤3
  • Q4: What was the most confusing step? (Multiple choice: Finding it; Configuring; Data inputs; Interpreting results; Sharing outcomes; Other) — AI follow-ups: 2
  • Q5: What outcome did you get? (Open-ended, request concrete example) — AI follow-ups: 2
  • Q6: Compared to your previous method, how is speed/quality? (Matrix rating 1–5: Speed; Accuracy; Consistency; Collaboration) — AI follow-ups: 1 per metric rated ≤3, max 3 total
  • Q7: What's missing for this to be "must-have"? (Open-ended) — AI follow-ups: 2
  • Q8: Would any add-ons help? (Multiple choice, multi-select: Advanced analytics; Export/API; Team seats; SSO; Dedicated support; None) — AI follow-ups: 1
  • Q9: How likely are you to recommend [Feature X]? (Rating scale 0–10) — AI follow-ups: 1–2 if rated ≤6

What to do with the results: Tag themes around must-keep features and adjacent needs. If someone selects add-ons, open a CRM expansion task for your sales or CS team.
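
A minimal sketch of that CRM handoff, assuming the Q8 answer arrives as a list of selected options; openCrmTask is a stand-in for whatever your CRM actually exposes:

```typescript
// Answer values mirror Q8 above; the CRM call is a stub.
type AddOn = "Advanced analytics" | "Export/API" | "Team seats" | "SSO" | "Dedicated support" | "None";

function handleAddOnAnswer(accountId: string, selected: AddOn[]): void {
  const signals = selected.filter((a) => a !== "None");
  if (signals.length === 0) return;
  // One expansion task per account, listing everything they asked for.
  openCrmTask(accountId, `Expansion interest: ${signals.join(", ")}`);
}

function openCrmTask(accountId: string, summary: string): void { /* your CRM's API here */ }
```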

Why this works: You're catching users at two critical moments—when they're forming habits or when they've already given up. The comparison questions reveal whether your feature actually delivers value against their old workflow, which tells you if adoption problems are about UX or fundamental product-market fit.

3. Canceled Customer Interview

Goal: Diagnose real churn drivers and test potential save offers.

When to trigger: Within 30 days of account cancellation or downgrade.

Questions:

  • Q1: What was the primary reason for canceling? (Multiple choice: Budget; Low usage; Missing capability; Switching tools; Support experience; Security/compliance; Other) — AI follow-ups: 2
  • Q2: What outcome were you hoping for that went unmet? (Open-ended) — AI follow-ups: 2
  • Q3: How much value did you get relative to price? (Rating scale 1–5) — AI follow-ups: 1–2 if rated ≤3
  • Q4: Which features did you use most? (Multiple choice, multi-select: AI interviews; Forms; Pulses; Analytics; Integrations; Automations; None) — AI follow-ups: 1
  • Q5: Where did friction occur most often? (Multiple choice: Setup; Data quality; Integrations; Performance; Usability; Training; Other) — AI follow-ups: 2
  • Q6: What would have changed your decision? (Open-ended) — AI follow-ups: 2
  • Q7: If offered one of the following, would you reconsider? (Multiple choice: Training; Feature pilot; Temporary discount; Implementation help; No) — AI follow-ups: 1 if answer is not No
  • Q8: If switching, which tool and why? (Open-ended) — AI follow-ups: 2
  • Q9: Would you be open to a future check-in? (Multiple choice: Yes in 30 days; Yes in 90 days; No) — AI follow-ups: 0–1

What to do with the results: Tag the churn reason in your CRM. Trigger a save play if appropriate, or archive the account with a win-back reminder for later.
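
The save-play-or-win-back branch is easy to automate once answers land in your system. A sketch with assumed field names and stubbed helpers:

```typescript
// Assumed response shape; which reasons count as "savable" is your call.
interface ChurnResponse {
  accountId: string;
  reason: string;                         // Q1 answer
  openToSaveOffer: boolean;               // Q7 answered anything but "No"
  checkInWindow?: "30 days" | "90 days";  // Q9 answer, if any
}

function handleChurnResponse(res: ChurnResponse): void {
  tagCrm(res.accountId, `churn-reason:${res.reason}`);
  if (res.openToSaveOffer) {
    startSavePlay(res.accountId); // e.g. route to CS with the Q7 offer they picked
  } else if (res.checkInWindow) {
    scheduleWinBack(res.accountId, res.checkInWindow === "30 days" ? 30 : 90);
  }
}

// Stubs for your CRM and lifecycle tooling.
function tagCrm(accountId: string, tag: string): void { /* ... */ }
function startSavePlay(accountId: string): void { /* ... */ }
function scheduleWinBack(accountId: string, days: number): void { /* ... */ }
```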

Why this works: People are more honest after they've already canceled because there's no pressure to be polite. The save offer question (Q7) tests whether the problem was fixable or fundamental, which tells you where to invest retention resources.

4. Win/Loss Buyer Interview

Goal: Capture honest decision criteria to sharpen your product and sales approach.

When to trigger: 7–21 days after a deal closes (won or lost).

Questions:

  • Q1: What problem did you want to solve when you initially spoke to us? (Open-ended) — AI follow-ups: 1
  • Q2: How critical was the problem to your business? (Rating scale 1–5: 1=Not critical, 5=Very critical) — AI follow-ups: 1
  • Q3: Which criteria mattered most? (Ranking: Capabilities; Ease of use; Integration fit; Security/compliance; Price/ROI; Support; Vendor trust) — AI follow-ups: 1 each on the top 2 criteria
  • Q4: How did vendors compare on your top criteria? (Matrix rating 1–5: Rate Us vs Competitor A/B on top 2 criteria) — AI follow-ups: 1 per rating ≤3
  • Q5: What nearly changed the outcome? (Open-ended) — AI follow-ups: 2
  • Q6: What proof points mattered? (Multiple choice, multi-select: Case studies; Pilot results; Security docs; References; Pricing flexibility; Roadmap) — AI follow-ups: 1
  • Q7: Where did we create doubt? (Open-ended) — AI follow-ups: 2
  • Q8: How did pricing land with stakeholders? (Rating scale 1–5: 1=Too high, 3=Fair, 5=Bargain) — AI follow-ups: 1
  • Q9 (if lost): What would we need to win a rematch? (Open-ended) — AI follow-ups: 2
  • Q9 (if won): What must we deliver in first 90 days to confirm value? (Open-ended) — AI follow-ups: 2

What to do with the results: Update your CRM competitor fields, refine battlecards, and adjust pricing notes. If you won, create a success plan for the first 90 days.
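
A sketch of that post-interview branch, with assumed field names and stubbed CRM helpers. Both outcomes update competitor intel; only wins get a success plan:

```typescript
// Assumed shapes and stubs; none of this is a real CRM API.
interface WinLossResponse {
  dealId: string;
  outcome: "won" | "lost";
  topCriteria: string[];     // Q3 top-ranked criteria
  competitorNotes: string;   // context from Q4 and the open-ended answers
  firstNinetyDays?: string;  // Q9 (if won)
}

function handleWinLoss(res: WinLossResponse): void {
  updateCrmField(res.dealId, "decision_criteria", res.topCriteria.join("; "));
  updateCrmField(res.dealId, "competitor_notes", res.competitorNotes);
  if (res.outcome === "won" && res.firstNinetyDays) {
    createSuccessPlan(res.dealId, res.firstNinetyDays);
  }
}

function updateCrmField(dealId: string, field: string, value: string): void { /* ... */ }
function createSuccessPlan(dealId: string, commitments: string): void { /* ... */ }
```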

Why this works: The 7–21 day window gives buyers time to decompress but keeps details fresh. Asking buyers you lost what would change the outcome, and buyers you won what would confirm value, gives you roadmap priorities that actually match how deals get made and renewed.

5. Weekly Employee Check-In

Goal: Replace status meetings with a pulse that surfaces team health, blockers, and wins.

When to trigger: Weekly, per team or company-wide.

Questions:

  • Q1: How was your workload this week? (Rating scale 1–5: 1=Overwhelmed, 3=Manageable, 5=Light) — AI follow-ups: 1–2 if rated ≤2
  • Q2: What's your top win this week? (Open-ended) — AI follow-ups: 1
  • Q3: Any blockers we should remove? (Open-ended) — AI follow-ups: 2
  • Q4: How clear are priorities for next week? (Rating scale 1–5) — AI follow-ups: 1–2 if rated ≤3
  • Q5: Team collaboration this week felt… (Multiple choice: Great; Good; Mixed; Poor) — AI follow-ups: 1–2 if answered Mixed or Poor
  • Q6: Do you have what you need to do your best work? (Multiple choice: Yes; Mostly; Not yet) — AI follow-ups: 1–2 if answered Mostly or Not yet
  • Q7: Any feedback for your manager? (Open-ended, optional anonymity) — AI follow-ups: 1
  • Q8: Well-being check (Matrix rating 1–5: Energy; Focus; Work/life balance) — AI follow-ups: 1 per metric rated ≤3, max 3 total
  • Q9: Anything else on your mind? (Open-ended) — AI follow-ups: 1

What to do with the results: Auto-route blockers as tasks to the right owners. Track well-being trends over time. Alert managers when red flags appear.
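
Trend tracking can be as simple as a trailing average per metric with an alert threshold. A sketch assuming you store each person's weekly Q8 scores; the four-week window and 2.5 cutoff are illustrative defaults, not recommendations:

```typescript
// Assumed storage: one entry per person per week, scores 1–5 from Q8.
interface WeeklyWellbeing {
  energy: number;
  focus: number;
  workLifeBalance: number;
}

const WINDOW = 4;      // trailing weeks to average
const RED_FLAG = 2.5;  // alert when a trailing average dips below this

function checkWellbeingTrend(history: WeeklyWellbeing[], notify: (msg: string) => void): void {
  const recent = history.slice(-WINDOW);
  if (recent.length < WINDOW) return; // not enough data yet

  const avg = (pick: (w: WeeklyWellbeing) => number): number =>
    recent.reduce((sum, w) => sum + pick(w), 0) / recent.length;

  const metrics: [string, number][] = [
    ["energy", avg((w) => w.energy)],
    ["focus", avg((w) => w.focus)],
    ["work/life balance", avg((w) => w.workLifeBalance)],
  ];
  for (const [name, value] of metrics) {
    if (value < RED_FLAG) {
      notify(`${name} has averaged ${value.toFixed(1)} over the last ${WINDOW} weeks`);
    }
  }
}
```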

Why this works: Weekly cadence catches problems before they snowball into burnout or turnover. Starting with a win question sets a positive tone, and the optional anonymity on manager feedback gets you honest input without creating awkwardness.

Key Takeaways

  • These five templates cover the most common feedback scenarios: onboarding, feature adoption, churn, win/loss analysis, and employee pulse checks
  • Each template specifies question types (open-ended, multiple choice, rating scales, matrix ratings) and how many AI follow-ups to use
  • The number of AI follow-ups varies by question importance and answer type (typically 1–2 for focused probing)
  • Post-interview actions matter: route insights to owners, update your CRM, trigger automations, and close the loop with respondents
  • Start with one template, customize the questions to fit your product, and launch within an hour

FAQs

Can I mix question types in one interview?

Yes. Combining multiple-choice questions for quick segmentation with open-ended questions for depth gives you both quantitative and qualitative data in one flow.

How many AI follow-ups should I allow per question?

Start with 1–2 for most questions. Allow 0–1 for simple yes/no questions. Use 2–3 for critical open-ended questions where you need maximum depth.
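
If you'd rather enforce those defaults in code than remember them, they fit in one small function. The numbers below are just this article's guidance encoded as a sketch; the question taxonomy is hypothetical:

```typescript
// The article's defaults as data: 0–1 for yes/no, 1–2 for most questions,
// 2–3 for critical open-ended, and extra probing when a rating comes in low.
type Question =
  | { kind: "yes-no" }
  | { kind: "multiple-choice" }
  | { kind: "open-ended"; critical?: boolean }
  | { kind: "rating"; value: number; probeBelow: number }; // e.g. probeBelow = 3

// Returns the maximum follow-ups to allow for one answered question.
function followUpBudget(q: Question): number {
  switch (q.kind) {
    case "yes-no":          return 1;
    case "multiple-choice": return 1;
    case "open-ended":      return q.critical ? 3 : 2;
    case "rating":          return q.value <= q.probeBelow ? 2 : 0;
  }
}
```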

Should I make interviews anonymous?

For employee check-ins or sensitive topics, yes. For customer feedback where you might want to follow up personally, no. You can also offer anonymity as an option and let users decide.

How long do these interviews typically take to complete?

Most complete in 3–5 minutes with AI follow-ups. Voice mode cuts that to 2–3 minutes since talking is faster than typing.

Can I use these templates for different industries?

Yes. The structure works across B2B, B2C, SaaS, healthcare, education, and more. Just swap the feature names, integration options, and terminology to match your product.

What if I want to combine two templates?

You can, but keep total questions under 10 to avoid drop-off. For example, combine onboarding questions with feature adoption if you're launching a new product and want to track both setup and early usage.
