Good design is good business — Thomas J. Watson
This story is about how we decreased 1- and 2-star feedback by 9%, and how we are helping the central operations team at OYO filter out, on average, 8,500 rooms per month that could potentially lead to a bad customer experience.
OYO has deployed a highly capable on-ground team across 200+ cities, primarily to ensure a ‘best-in-class guest experience’. Along with this, they manage inventory, engage with guests, provide support to owners and oversee operations as required. We call them ‘captains’; they are the ‘Superman’ of our team and of this story.
In the highly price-sensitive ecosystem of budget hotels in India, ensuring a ‘best-in-class experience’ for our guests makes it inevitable to pre-emptively and regularly evaluate the inventory. To facilitate this evaluation, we use ‘Audits’ as a major process. An audit is essentially a questionnaire, a kind of checklist that helps our captains check against standards and register their observations. It covers multiple grounds, from the cleanliness of a property to minuscule details like whether the manual latch of a room is working or not.
So audits help captains to:
- Identify issues with the rooms and the property
- Filter out bad rooms from the system
- Update the owners and concerned stakeholders at OYO about malfunctions and get them fixed
So, for this story, ‘Audits’ will be our central plot. What lies beyond audits and how our captains carry out their other responsibilities is something we can discuss later.
What were the gaps in ‘Audits’, and how we helped
Our data analysis revealed that the expected correlation between audits and guest experience was missing. In other words, despite regular audits, guest experience in some properties was the same as in properties where the audit completion rate was poor. This clearly indicated that audits weren’t as effective as we expected. We conducted thorough field research to identify the gaps and tried to fix them.
1. Credibility & quality concern
Observation: Analyzing the data, we made an unsettling observation: in around 2,500 cases, captains were taking less than a minute to complete audits comprising around 200 questions. Further, during our ground research, we observed that in most scenarios captains were answering questions without inspection, making the process less credible.
Solution 1: We displayed negative guest feedback alongside the questions, as illustrated in the image below. This not only enabled two-way communication between the system and the captains but also drew the captain’s attention to recurring issues, stimulating better examination of problematic areas non-intrusively.
Solution 2: Along with guest feedback, we also incorporated housekeeping responses from the checklists cleaning staff fill out after check-outs. This provided one more layer of feedback and checks in the system. For example, an inconsistency between the responses of the cleaning staff and the captain would trigger a notification to the property manager.
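A minimal sketch of such an inconsistency check, assuming hypothetical question keys and in-memory answer maps (the real system would read these from its own data stores):

```python
def find_inconsistencies(housekeeping, captain):
    """Compare housekeeping checklist answers with the captain's audit
    answers for the same room. Question keys are illustrative assumptions."""
    return [
        q for q in housekeeping
        if q in captain and housekeeping[q] != captain[q]
    ]

# Hypothetical answers for one room after a check-out and an audit:
housekeeping = {"bed_linen_clean": True, "washroom_clean": False}
captain = {"bed_linen_clean": True, "washroom_clean": True}

mismatches = find_inconsistencies(housekeeping, captain)
if mismatches:
    # In production this would be a notification to the property manager.
    print(f"Notify property manager: inconsistent answers for {mismatches}")
```

The mismatch on `washroom_clean` is what would trigger the notification described above.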
Note: In this case, the credibility concern wasn’t the result of wrong intentions on the part of our captains but of a bad system and an unsupportive product. How? See points 2, 3 and 4: the other major gaps in the system.
2. Recommended rooms
Observation: To complete an audit, a captain has to audit at least three rooms. On the ground, we observed that captains had to procure keys from the reception, where property managers decided which rooms would be picked for the audit. In most cases, the best rooms were being picked, compromising the quality of the audits.
Solution: We started recommending six rooms for the audit on the basis of the absolute count of issues. If the count is the same for multiple rooms, the room that has not been audited for the longest time is recommended first. This leads to the worst rooms being picked for the audit instead of the best ones.
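A minimal sketch of this ranking, assuming hypothetical field names for the room records (the actual service would pull these from guest-feedback and audit-history data):

```python
from datetime import datetime

# Illustrative room records: issue counts and last audit dates are made up.
rooms = [
    {"room": "101", "issue_count": 4, "last_audited": datetime(2019, 5, 1)},
    {"room": "102", "issue_count": 7, "last_audited": datetime(2019, 5, 20)},
    {"room": "103", "issue_count": 4, "last_audited": datetime(2019, 4, 10)},
    {"room": "104", "issue_count": 1, "last_audited": datetime(2019, 5, 25)},
]

def recommend_rooms(rooms, k=6):
    # Most issues first; ties broken by the oldest audit date.
    ranked = sorted(rooms, key=lambda r: (-r["issue_count"], r["last_audited"]))
    return [r["room"] for r in ranked[:k]]

print(recommend_rooms(rooms, k=3))  # ['102', '103', '101']
```

Room 102 leads on issue count, while the tie between 101 and 103 goes to 103, which has gone longer without an audit.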
Impact: On average, 83% of the rooms audited are now recommended rooms, even though checked-in rooms are also included among the six recommendations.
Impact: Although an audit can be completed by auditing just three rooms, on average 3.5 rooms are now audited per audit. Given that around 7,000 audits are completed every month (and growing), this 0.5-room improvement means around 3,500 extra rooms audited with the same bandwidth.
Saving development time: When we designed this feature, we had the option to build logic that removes checked-in rooms from the list of recommendations, and a ranking service that ranks rooms on factors such as feedback count, when the room was last audited, the severity of the issue type, and the date of the last feedback received. This would have allowed us to make it mandatory for captains to pick rooms only from the list of six recommendations, or to show ‘mandatory to audit 3 rooms’.
But for the phase 1 release we concentrated on fairly simple logic to significantly reduce development time. We first wanted to observe the unforced, natural adoption of the feature and then decide how much engineering effort to invest.
3. Redundant and tedious questionnaire
Observation: Captains were responding to the same fixed set of 200-plus questions every time they conducted an audit. Guest feedback wasn’t being accounted for when framing the audit.
For example, if one room had a significant number of washroom-related issues while another had a colossal number of AC-related issues, the questionnaire for both rooms was the same. This made the process monotonous, and captains lost trust in it.
Solution 1: Let’s take an example: the washroom-related section of an audit now has two sets of questions.
Priority 0: basic questions which are always asked.
Priority 1: questions which are triggered only if the washroom has been among the top three issues of a room (based on the last 14 days of data).
This ensures captains answer fewer questions when there are fewer issues in a room. If guests have reported no issues in a room, the captain ends up answering only the few ‘Priority 0’ questions.
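The two-tier logic above can be sketched as a simple filter over a question bank. The section names, question texts and field names here are illustrative assumptions:

```python
# Hypothetical question bank; in production this would span 200+ questions.
QUESTIONS = [
    {"text": "Is the washroom clean?", "section": "washroom", "priority": 0},
    {"text": "Is the shower drain clogged?", "section": "washroom", "priority": 1},
    {"text": "Is the AC remote working?", "section": "ac", "priority": 0},
    {"text": "Is the AC cooling properly?", "section": "ac", "priority": 1},
]

def build_questionnaire(questions, top_issue_sections):
    """Keep all Priority 0 questions; include Priority 1 questions only for
    sections among the room's top reported issues (e.g. last 14 days)."""
    return [
        q for q in questions
        if q["priority"] == 0 or q["section"] in top_issue_sections
    ]

# Room with 'washroom' among its top 3 issues: both washroom questions are
# asked, but the Priority 1 AC question is skipped.
qs = build_questionnaire(QUESTIONS, top_issue_sections={"washroom"})
print([q["text"] for q in qs])
```

A room with no reported issues would get only the two Priority 0 questions.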
Solution 2: To further reduce redundancy in the system, we created logic so that fewer audits are triggered when properties are performing well on guest experience, and we capped audits at a maximum of eight per week per captain. This logic is based on the absolute count of unhappy guest feedback and on when the property was last audited.
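A rough sketch of that triggering logic, assuming hypothetical property records (the real ranking inputs and thresholds are internal to the system):

```python
from datetime import date

def audits_to_trigger(properties, weekly_cap=8):
    """Trigger audits for the worst-performing properties first, up to a
    weekly cap per captain. Field names and ordering are illustrative."""
    # Worst first: most unhappy feedback, ties broken by oldest audit.
    ranked = sorted(
        properties,
        key=lambda p: (-p["unhappy_feedback"], p["last_audit"]),
    )
    return [p["id"] for p in ranked[:weekly_cap]]

properties = [
    {"id": "P1", "unhappy_feedback": 12, "last_audit": date(2019, 5, 1)},
    {"id": "P2", "unhappy_feedback": 0, "last_audit": date(2019, 5, 28)},
    {"id": "P3", "unhappy_feedback": 5, "last_audit": date(2019, 5, 10)},
]

# With a cap of 2, the well-performing property P2 is skipped this week.
print(audits_to_trigger(properties, weekly_cap=2))  # ['P1', 'P3']
```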
Impact: There has been a significant improvement in reducing unhappy feedback from audited rooms, which we track through the ‘audit quality score’. This score has improved by 35% from the launch of this product until June.
The audit quality score is defined as the percentage of unhappy guests who stayed in the audited rooms within 10 days of the audit date.
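The metric can be sketched as follows, assuming hypothetical stay records with a check-in date and an unhappy flag:

```python
from datetime import date, timedelta

def audit_quality_score(stays, audit_date, window_days=10):
    """Percentage of unhappy guests among guests who stayed in an audited
    room within `window_days` of the audit. Field names are assumptions."""
    cutoff = audit_date + timedelta(days=window_days)
    in_window = [s for s in stays if audit_date <= s["checkin"] <= cutoff]
    if not in_window:
        return 0.0
    unhappy = sum(1 for s in in_window if s["unhappy"])
    return 100.0 * unhappy / len(in_window)

stays = [
    {"checkin": date(2019, 6, 2), "unhappy": True},
    {"checkin": date(2019, 6, 5), "unhappy": False},
    {"checkin": date(2019, 6, 8), "unhappy": False},
    {"checkin": date(2019, 6, 25), "unhappy": True},  # outside the window
]

# One unhappy guest out of three stays inside the 10-day window.
print(round(audit_quality_score(stays, date(2019, 6, 1)), 1))  # 33.3
```

Lower is better here, so the reported 35% improvement means fewer unhappy guests in recently audited rooms.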
4. Lack of visibility of vital information
Observation: For a system to work efficaciously, stakeholders should be able to track vital metrics and analyse them to identify problems. In our case, the product they were using lacked this visibility. When we interviewed some of the high-performing, self-motivated captains, we found that some of them were maintaining their own dashboards in Microsoft Excel to track basic metrics such as the percentage of unhappy guests, and were using these data points to plan their next moves. Most other captains lacked this capability.
Hence captains were not able to act pre-emptively and plan to maintain a healthy guest experience.
Solution: We created a property performance report for captains, providing visibility into the key metrics affecting guest experience, and consolidated the insights related to issues at the property.
5. Fighting muscle memory
Observation: Our captains had developed muscle memory around the questions. They knew the question sequence and what to answer in order to complete an audit quickly without reporting any issues, since reported issues create tasks for them. As a result, captains were not going through the audit thoroughly.
Solution: We started asking the same question in different wording, as shown in the image below.
6. Prioritisation of work and efficiency
Observation: When we shadowed some of the captains, we observed that they spent most of their time travelling from one property to another, so the aggregate time spent performing tasks was very low compared to the total travel time.
Moreover, due to bad design and the way we listed tasks in version 1, captains used to miss certain tasks at a property and had to travel back to complete the remaining ones. This adversely affected their efficiency.
Solution 1: We grouped all the tasks at a property so that a captain can complete multiple tasks in a single visit.
Solution 2: Every task was assigned a turnaround time (TAT). The property whose task had the shortest TAT was prioritised over other properties, making it easier for captains to schedule their day.
Solution 3: We also provided an option to sort tasks by distance from the captain’s current location.
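The two orderings can be sketched as alternative sort keys over the task feed. The task fields and coordinates here are made up for illustration, and the distance is a simple straight-line approximation rather than actual travel distance:

```python
import math

# Hypothetical task feed entries with a turnaround time and a location.
tasks = [
    {"property": "A", "tat_hours": 48, "lat": 28.61, "lng": 77.21},
    {"property": "B", "tat_hours": 12, "lat": 28.70, "lng": 77.10},
    {"property": "C", "tat_hours": 24, "lat": 28.55, "lng": 77.30},
]

def by_tat(tasks):
    # Default ordering: most urgent turnaround time first.
    return sorted(tasks, key=lambda t: t["tat_hours"])

def by_distance(tasks, here):
    # Alternative ordering: straight-line distance from current location.
    def dist(t):
        return math.hypot(t["lat"] - here[0], t["lng"] - here[1])
    return sorted(tasks, key=dist)

print([t["property"] for t in by_tat(tasks)])                       # ['B', 'C', 'A']
print([t["property"] for t in by_distance(tasks, (28.60, 77.22))])  # ['A', 'C', 'B']
```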
Saving development time: We had two more design options for solution 3:
1. Representing tasks on a map from the current location, which could have made it easier for the user to comprehend the information.
2. Showing suggestions of nearby properties at the top of the task feed after the user has completed their first task (we tag the geo-location whenever an audit is completed).
In the MVP, we wanted to test whether captains, who are always in the field, need this feature at all, and then decide on the engineering effort to invest. Hence we went ahead with a simple sort option.
Impact: We have increased the number of captains by 50% and doubled the number of properties, yet the number of audits per property per month has reduced.
Impact: Earlier, every captain used to audit eight or more properties; now, on average, 4.3 audits are triggered per week per captain, freeing up captain bandwidth for other tasks.
Impact: Adoption of Krypton v2 is 95% (95% of triggered audits are completed in the given time).
Impact: When we calculate the difference between % delighted and % unhappy from feedback collected 3 to 5 days before the audit and 3 to 5 days after the audit in audited rooms, we see a positive delta of 2%.
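That before/after delta can be sketched as follows, assuming hypothetical feedback labels (the real pipeline would aggregate actual guest ratings):

```python
def delight_unhappy_delta(before, after):
    """Change in (% delighted - % unhappy) across an audit.
    `before`/`after` are lists of feedback labels; labels are assumptions."""
    def score(feedback):
        if not feedback:
            return 0.0
        delighted = 100.0 * feedback.count("delight") / len(feedback)
        unhappy = 100.0 * feedback.count("unhappy") / len(feedback)
        return delighted - unhappy

    return score(after) - score(before)

# Illustrative feedback windows around one audit:
before = ["delight", "unhappy", "unhappy", "neutral"]  # score: -25
after = ["delight", "delight", "unhappy", "neutral"]   # score: +25

print(delight_unhappy_delta(before, after))  # 50.0
```

A positive delta, as in the 2% figure reported above, means audited rooms trend toward happier guests after the audit.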
The team behind….
Android development — Mohit Rana, Vikas Udasi
Back-end development — Manpreet Singh Bedi, Vikas Sharma, Akash Kumar, Vishnu Poddar
Central operations — Neha Bhattacharya
Engineering manager — Ishan Bansal
Product manager — Harshvardhan Singh
Product designer — Vikas Goel (me)
All the captains who participated in the sessions enthusiastically and helped us in conducting the research.
— Thank You