At graze we understand the value of allowing our customers to control and personalise their relationship with us. Perhaps the best example of this is the way we allow our customers to customise their boxes by rating our snacks. This not only lets us give the customer a better product, but also gives us valuable quantitative insight into what's trending across our range and why. We also know that it's important to learn more about our customers, and user testing is a great way to do that.
Our user testing has often focused on understanding new products, how our brand messaging performs, and how customers engage with us. Sessions like these often end with the user feeling as though their own skill or ability is being tested; they can feel anxious or under pressure in these situations. Other issues arise when the moderator is biased towards a feature or element and, positively or negatively, affects how the user feels about the task in hand.
Our main concern was that user testing is normally task led: the moderator is given a range of tasks for the user to complete, often dictated by the project developers or business goals.
A goal-focused technique isn't very natural
For example, a customer could be on one of our product pages, which the moderator knows very well. The moderator asks the customer to purchase a particular product, a request that is extremely presumptive.
The main assumptions are:
1. The customer knows this is a page where they'd buy that product.
2. The customer would want to make a purchase at this point in their journey.
The request creates a situation in the customer's mind that may not be natural. The feature or page could be brilliant at selling that particular product; however, if the customer is not in that mindset at the time, then the feature is wasted.
We knew we could learn a great deal more from the customer's natural journey, and the learning and discovering they do before and after a purchase, than from specifically curated situations.
We know where we would like the customer to end up, the tasks they've hopefully performed, and the ideal outcomes. With that in mind we started to work backwards from the end goal, noting the ideal journey we'd like them to take and the journeys they may choose to take on their own.
We wanted to use both open and closed questions to guide the user back to their task when we felt they may be going off track.
Avoiding questions such as "do you think you're on the right page?" or "how else would you complete this task?" is really important. This style of questioning instantly makes the user feel like they've done something wrong. The fault is not with the user; it lies with the feature for not being easy enough to understand or clear enough in its purpose.
Using questions such as "what do you think is happening here?" and "what do you expect to see if you click there?" allows the moderator to gather an understanding of what the user is thinking, giving valuable insight into how the feature could be improved.
We use this questioning technique to form a framework and toolbox for conducting our testing. We try to guide the user in the most natural way possible to gather insight into what they did, how they felt, what they said and what they understood. This is important information that can help power empathetic design.
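To make the idea of a question "toolbox" concrete, here is a minimal, hypothetical sketch of how a moderator's prompt list could be screened before a session. The phrase lists and function names are illustrative assumptions, not graze's actual tooling; the example simply flags the leading prompts discussed above and lets neutral, open prompts through.

```python
# Hypothetical sketch of a moderator question toolbox.
# The marker phrases below come from the examples in this article;
# everything else (names, structure) is an illustrative assumption.

# Phrases from prompts the article says to avoid, because they make
# the user feel they have done something wrong.
LEADING_MARKERS = (
    "right page",
    "how else would you",
)

def classify_prompt(prompt: str) -> str:
    """Return 'avoid' for leading prompts, 'open' for neutral ones."""
    lowered = prompt.lower()
    if any(marker in lowered for marker in LEADING_MARKERS):
        return "avoid"
    return "open"

def screen_toolbox(prompts):
    """Split a list of candidate prompts into open vs. leading."""
    open_prompts = [p for p in prompts if classify_prompt(p) == "open"]
    leading = [p for p in prompts if classify_prompt(p) == "avoid"]
    return open_prompts, leading
```

For example, `classify_prompt("Do you think you're on the right page?")` returns `"avoid"`, while `classify_prompt("What do you think is happening here?")` returns `"open"`. A real toolbox would be curated by researchers rather than simple phrase matching; the sketch only shows the shape of the idea.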