The questions that get answered, and what to do with the answers.
Most hotel surveys are too long, sent too late, and read by no one once the score is logged. A good survey is short, lands while the stay is fresh, and feeds something that actually changes. Here is how to build one.
Two windows work, and they do different jobs.
An in-stay check on day one or two catches problems while you can still fix them. A guest who tells you the shower is cold on the first morning is a guest you can still make happy. The same complaint in a post-stay survey is just a bad review you read after the fact.
A post-stay survey within 24 hours of checkout captures a sharp memory. Send it a week later and you get vague answers, or none. The stay has blurred.
Run both if you can. The in-stay check is for recovery. The post-stay survey is for measurement.
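If the sends are triggered automatically, the two windows reduce to a pair of date checks. A minimal sketch; the survey names and function shape are illustrative, not any particular system's schema:

```python
from datetime import datetime, timedelta

def surveys_due(check_in: datetime, check_out: datetime,
                now: datetime) -> list[str]:
    """Return which surveys should fire for a stay at time `now`.

    Two windows: an in-stay check once the guest has been in-house a
    full day (while problems are still fixable), and a post-stay survey
    in the 24 hours after checkout (while the memory is still sharp).
    """
    due = []
    if check_in + timedelta(days=1) <= now < check_out:
        due.append("in_stay_check")
    if check_out <= now <= check_out + timedelta(hours=24):
        due.append("post_stay_survey")
    return due
```

A send a week after checkout falls outside both windows and returns nothing, which matches the advice above.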
Keep it under six questions. Mix one or two ratings, such as an overall score and a recommendation score, with a few specifics, and end with two open-text questions where the guest can actually say something.

Those open answers are where the useful stuff lives. Ratings tell you there is a problem. Open answers tell you what it is. A survey that is all ratings gives you a number and no idea what to do with it.
CSAT and NPS are often used interchangeably. They are not the same thing:
CSAT tells you about the stay that just happened. NPS tells you whether that guest will come back and bring others. Most hotels track both, and two survey questions are enough to do it. The full definitions are in the Viqal glossary on CSAT and NPS.
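Both metrics are simple arithmetic over the two rating questions. A sketch, assuming the common conventions: CSAT as the share of 4s and 5s on a 1-to-5 satisfaction scale, NPS as promoters minus detractors on a 0-to-10 recommendation scale:

```python
def csat(scores: list[int]) -> float:
    """CSAT: percentage of responses scoring 4 or 5 on a 1-5 scale."""
    return 100 * sum(s >= 4 for s in scores) / len(scores)

def nps(scores: list[int]) -> float:
    """NPS: % promoters (9-10) minus % detractors (0-6) on a 0-10 scale."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100 * (promoters - detractors) / len(scores)
```

CSAT moves with individual stays; NPS is noisier per response and is best read as a trend over time.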
This is where most survey programs fall down: the score gets logged and nothing moves. A few habits fix that. Route low scores to a staff member the same day, with the guest's comment attached, so a recoverable stay gets recovered. Tag open answers by theme, so patterns show up instead of a pile of one-offs. And point satisfied guests toward a public review.
An in-stay survey that runs through your guest messaging keeps the response and the follow-up in one place. A flag comes in, a staff member picks it up, the guest hears back before checkout. When the survey runs through an AI agent, the routine "everything was great" responses get acknowledged automatically, and only the flagged ones reach a person. Built into your wider hotel automation, the survey stops being a report you read later and becomes something that changes the stay while it is happening.
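That routing logic is small enough to sketch. The rating threshold and return shape here are assumptions for illustration, not any vendor's API:

```python
def triage(rating: int, comment: str = "") -> dict:
    """Decide what happens to an in-stay survey response.

    Low ratings are routed to a staff member the same day with the
    guest's own words attached; routine positives are acknowledged
    automatically, so no one reads "everything was great" by hand.
    """
    if rating <= 3:  # threshold is an assumption; tune per property
        return {"action": "route_to_staff",
                "priority": "same_day",
                "context": comment}
    return {"action": "auto_acknowledge", "priority": None, "context": ""}
```

The point of the split is that staff time goes only where judgment is needed.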
That loop of ask, catch, respond is the whole point.
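One habit that makes open answers actionable is tagging them by theme, so patterns show up instead of a pile of one-offs. That can start as simple keyword matching before any AI is involved; the theme lists below are illustrative, not a real taxonomy:

```python
import re

# Illustrative theme keywords, not an exhaustive taxonomy.
THEMES = {
    "housekeeping": ["clean", "dirty", "towels", "housekeeping"],
    "temperature": ["cold", "hot", "heating", "shower"],
    "noise": ["noise", "noisy", "loud"],
    "arrival": ["check-in", "front desk", "queue", "wait"],
}

def tag_themes(comment: str) -> list[str]:
    """Return every theme whose keywords appear as whole words."""
    text = comment.lower()
    return [theme for theme, words in THEMES.items()
            if any(re.search(rf"\b{re.escape(w)}\b", text) for w in words)]
```

Whole-word matching matters here: without the word boundaries, "hotel" would trip the "hot" keyword. Counting tags across a month turns scattered comments into something like "temperature complaints doubled", which is a fixable fact.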
How many questions should a guest survey have?

Six or fewer. Short surveys get finished; long ones get abandoned. Use one or two rating questions, like an overall score and a recommendation score, and two open-text questions where guests can describe what actually happened. Every question past six costs you completed responses without adding much you will act on.
When is the best time to send a guest survey?

Two moments work. A short check on day one or two of the stay catches problems while you can still fix them. A post-stay survey within 24 hours of checkout captures a fresh memory. Surveys sent a week after checkout get vague answers or none, because the stay has already blurred together.
What is the difference between CSAT and NPS?

CSAT measures how satisfied a guest was with the stay that just happened, a snapshot. NPS measures how likely they are to recommend the hotel, which reflects loyalty and is best read as a trend over time. They answer different questions, and most hotels track both with two survey questions.
Why do guests not respond to surveys?

Usually because the survey is too long, sent too late, or on a channel they ignore. Surveys also lose responses when guests flag an issue and hear nothing back, so they stop bothering. Keeping it short, sending it while the memory is fresh, and running it on a channel like WhatsApp all improve response rates.
What should you do with survey results?

Act on them quickly. Route low scores to a staff member the same day with the guest's comment attached, so a recoverable stay gets recovered. Tag open answers by theme to spot patterns rather than treating each as a one-off. And direct satisfied guests toward a public review. A logged score nobody acts on is wasted effort.
Can guest surveys be automated?

Yes. An in-stay survey can be sent automatically through guest messaging, triggered a day or two into the stay. An AI agent can acknowledge the routine positive responses and route only the flagged ones to a person, so the survey scales without adding work. The automation handles volume; staff handle the cases that need judgment.