12 Tips to Improve your UX Usability Testing Technique

This article was originally written by Paul Olyslager. Usability testing is a technique used to evaluate a product, such as an application, website or book, by observing people using it. The goal is to discover usability problems, collect quantitative data (e.g. time on task, error rates) and determine the participant's satisfaction with the product. I've gathered 12 tips to sharpen your usability testing technique, which is key to discovering more errors and areas of improvement in your product.

1. Setting clear criteria for participant recruitment

Recruiting the right participants is key to effective user research, because your research results are only as good as the participants involved. Screen out participants who have a conflict of interest (they work for your client or a competitor), whose computer and web experience doesn't fit (too little or too much, unless that is appropriate for the project) and those who are not very expressive.

2. Number of participants

Back in 2000, Jakob Nielsen wrote that only 5 participants are necessary for a valuable usability test and that the insight gained diminishes rapidly after the fifth. Usability.gov determined this number with the help of a formula and arrived at much the same figure as Nielsen. Usually 3 to 5 respondents per round are enough to encounter many of the most significant problems related to the tasks you're testing. It's pretty much a certainty that you won't uncover some of the serious problems in a given round of testing, which is why you'll be doing more than one round.
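The formula usually cited alongside the "5 users" guideline is the Nielsen-Landauer problem-discovery model: found(n) = N × (1 − (1 − L)^n), where N is the total number of usability problems, L is the probability that a single participant encounters any given problem (about 0.31 in Nielsen's data) and n is the number of participants. A minimal sketch, assuming those published values:

```python
def problems_found(n_participants, total_problems=100, discovery_rate=0.31):
    """Expected number of problems uncovered by n participants,
    per the Nielsen-Landauer model: N * (1 - (1 - L)^n)."""
    return total_problems * (1 - (1 - discovery_rate) ** n_participants)

# With L = 0.31, five participants already surface roughly 84% of the
# problems, and each additional participant adds less and less.
for n in (1, 3, 5, 10, 15):
    print(f"{n:2d} participants -> ~{problems_found(n):.0f}% of problems")
```

This is why additional rounds of testing (after fixing what you found) pay off more than piling extra participants into a single round.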


3. Mention your objectives clearly to the user

Put the candidates at ease and walk them through the software tools and equipment. Explain the objectives of the test, how long it will take and how the gathered data will be used. Make clear that you are testing the product, not the participant's skills. Respondents have a tendency to attribute failure in a task to their own incapability rather than to a flaw in the design. Tell them they can't do anything wrong; in fact, the more problems they encounter while testing, the better. Stress this point more than once so test participants understand it clearly.

4. Choosing tasks carefully

Set tasks that are essential to the success of the new website or application, such as buying products, paying bills or contacting the company. If these 'top tasks' are not clear to you, you can always ask the client which questions your research needs to answer. People also tend to behave more naturally if you provide them with scenarios rather than instructions. Instead of asking them to find the contact section of your application, phrase it as a scenario. For example: "You fell down the stairs and had to call an ambulance. You're wondering whether your medical insurance covers this and would like to contact them – find the telephone number". A scenario provides some context and supplies information the user needs to know but doesn't have (e.g. the username and password for a test account). It's important not to give away any clues in the scenario.

5. Ask your respondents to think aloud during the test

Think-aloud protocols (TAP) involve participants thinking aloud as they perform a set of specified tasks. Ask them to say whatever they are looking at, doing and feeling as they move through the user interface. This method has several advantages: you'll learn what your users really think about the design, which can turn into actionable redesign recommendations.

6. Do not interrupt the flow of the participant’s thought process

Shut up and let the participants do the talking. This is not the time to interpret their actions and words. As an observer or moderator, you should listen and take notes.

7. Don’t lead the user

As a facilitator you should stay neutral, meaning you shouldn't influence your respondents or lead them to a desired result, consciously or unconsciously. If you do, your testing will lose its credibility. For example, when a test user is working through a sequence of screens and should click a button to continue, you shouldn't point at the button or even mention its label ("Continue"). Although it is very difficult for the moderator, allowing the tester to struggle is important and brings massive benefits. If a participant asks what they should do, respond with "What do you think?". The answer is very valuable.

8. Have the confidence to stop a user and refocus them on the task

Some respondents have a tendency to lose track of what they were doing. Repeat the initial question or task to get them back on track, but again, do not lead the user.

9. Note taking

If you are the facilitator of the session, you shouldn't be the one taking notes. Instead, get the observer(s) to take notes, and give them specific things to look for. If you're both the moderator and the note-taker, stop taking notes about things you're not going to report on (whether for time or scope reasons).


You could also try Morae's data logging tool. TechSmith made an excellent video about data logging with Morae, and David Travis wrote about exporting this data to Excel.

10. Plan to quantify your results

When gathering data, it’s easy to ask questions like “Did you think the navigation was clear?”. You’ll probably get a ‘yes’ or a ‘no’, but how will you quantify these responses? To help you out, psychologist Rensis Likert came up with the ‘Five-Point Likert Scale’, in which the respondents specify their level of agreement or disagreement. The format of a typical five-level Likert item could be:
  • Strongly agree
  • Agree
  • Neither agree nor disagree
  • Disagree
  • Strongly disagree
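Once responses are on a Likert scale, they can be mapped to numbers and compared across tasks or rounds. A minimal sketch (the sample responses below are invented for illustration):

```python
# Map the five Likert levels to numeric scores so responses can be
# averaged and compared. 5 = strongly agree ... 1 = strongly disagree.
LIKERT = {
    "Strongly agree": 5,
    "Agree": 4,
    "Neither agree nor disagree": 3,
    "Disagree": 2,
    "Strongly disagree": 1,
}

# Hypothetical answers to "The navigation was clear":
responses = ["Agree", "Strongly agree", "Neither agree nor disagree",
             "Agree", "Disagree"]

scores = [LIKERT[r] for r in responses]
mean_score = sum(scores) / len(scores)
print(f"Mean agreement: {mean_score:.1f} / 5")
```

A mean score per question lets you spot which parts of the design lag behind, and track whether a redesign actually moved the needle in the next round.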


11. Prioritize, rank and list

Ask your subjects to prioritize, rank or list their answers instead of asking questions that will be answered with a "yes" or a "no". For example: "What are the three things you noticed on the homepage?" If none of them mention the section that is important to you and the product, you should think about reorganizing the homepage.
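Tallying those "top three" answers across participants makes the pattern obvious at a glance. A small sketch, with section names invented for illustration:

```python
# Count how often each homepage section shows up in participants'
# "three things you noticed" answers. Sections mentioned by nobody
# are candidates for reorganizing the page.
from collections import Counter

answers = [  # one hypothetical top-3 list per participant
    ["hero banner", "search box", "news teaser"],
    ["hero banner", "login form", "search box"],
    ["news teaser", "hero banner", "footer links"],
]

mentions = Counter(item for top3 in answers for item in top3)
for section, count in mentions.most_common():
    print(f"{section}: mentioned by {count} of {len(answers)} participants")
```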

12. Keep the tests short

The length of the test depends on many factors, such as scope, the number of participants, the number of tasks and the duration of each task, which is why a test can range from 15 minutes (for a single page design) to over an hour (a full website design). Studies exceeding 30 minutes have a higher participant drop-off because you are likely to lose their attention. Do you have some useful tips on usability testing?

Some useful resources

Books:
  • Rocket Surgery Made Easy: The Do-It-Yourself Guide to Finding and Fixing Usability Problems – by Steve Krug
  • Handbook of Usability Testing: How to Plan, Design, and Conduct Effective Tests – by Jeffrey Rubin
  • Measuring the User Experience: Collecting, Analyzing, and Presenting Usability Metrics – by Tom Tullis
  • Other related books

Tools and articles:
  • Five Second Test – Landing page optimization for your mocks and wireframes
  • User Testing – “The fastest, cheapest way to find out why users leave your website”
  • Chalkmark – Online Screenshot Testing Software
  • Silverback – Guerrilla usability testing software for designers and developers
  • Facilitating a Usability Test – Christine Perfetti has several video tutorials where she addresses the role of the facilitator in a test.
Paul is the creator and editor of paulolyslager.com, a blog about User Experience, Usability and Design. He currently works as an interface designer for a newspaper publisher, doing A/B testing on the website and optimizing newsletters. He'll be moving to Berlin in May 2013 and is looking for new opportunities.