Users complete realistic tasks using your website or app and comment on their actions; moderators guide them, observers watch and listen, others take notes. That’s how a usability test is typically set up. Its purpose is to identify usability problems, find out how satisfied the participants are, and collect data for making informed design decisions. That’s in an ideal world… But even if you follow best practices like those found at usability.gov, there’s still a lot that can go wrong. Here are seven common mistakes you shouldn’t make during a usability test.
#1 Testing the user
The most important point at the beginning: You are testing the usability of your website or app – not the user! The participants of your test should be aware of that. Why? You want honest and reliable feedback, and the best way to get this is to make your testers feel at ease – at least as much as possible. They should behave as naturally as possible. After all, you are trying to get insights into real-life situations. Usability tests by their very nature put participants in an artificial and often slightly uncomfortable situation. So the first step toward meaningful test results is to overcome this unnaturalness by reassuring your testers that they can do no wrong during the test. Otherwise, you risk that they will feel insecure or won’t be honest about their real thoughts.

Test your product, not the user. (Source: www.lifeofpix.com)
Our advice: cross out the term “user test” in your papers and in your mind. Everywhere. Maybe even start your introduction with “We’re not testing you, we’re testing us.” It’s a usability test! Plus: make your participants feel comfortable. If possible, visit them in their natural habitat instead of using a usability lab. All you really need is enough space for everyone involved (test user, observers, moderator) and your recording setup. Or consider a remote test that lets your test users stay right at their desks – especially if they are located somewhere else.
#2 Testing too much
Setting up a test is exciting, especially if you have lots of new features coming up or want to test different design versions. But don’t try to test everything at once or to fit every possible task into one session! Otherwise it’s likely that you will either tire out your participants or run out of time and won’t get the desired test outcome. Also, consider who you want to recruit for your test: not everybody in your target audience may be able to afford the time. A typical test session lasts 60 to 90 minutes at most and contains 2 to 3 test scenarios. So you can’t test everything at the same time. You need to stay focused!

Focus on roughly 3 test scenarios to avoid testing too much. (Source: kaboompics.com)
To avoid struggling during the test session, good preparation is needed: Create a list of all the things you want to learn from your test, prioritizing the most important tasks and questions (see #3). Items with a lower priority can go on an extra list for later consideration if there’s still time left towards the end of your usability test. Plan in a few buffer minutes so you end the session with time to spare rather than running out of it.
Another good way to avoid overwhelming test users is to plan a longer usability test and break it up into several sessions, each focusing on different aspects.
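If it helps to make this concrete, here is a minimal sketch – in Python, with made-up scenario names, durations and a 90-minute budget as assumptions, not figures from this article – of how you might check whether your prioritized scenarios actually fit into a single session:

```python
# Minimal sketch: check whether prioritized test scenarios fit into one session.
# Scenario names, durations and the time budget below are illustrative
# assumptions, not figures from the article.

SESSION_BUDGET_MIN = 90        # upper end of a typical 60-90 minute session
INTRO_AND_WRAP_UP_MIN = 15     # reserved for introduction and debrief

# (scenario, priority, estimated minutes) - highest priority = 1
scenarios = [
    ("Complete a purchase with a saved payment method", 1, 20),
    ("Find and change the delivery address", 2, 15),
    ("Locate the return policy and start a return", 3, 20),
    ("Explore the new wishlist feature", 4, 25),
]

remaining = SESSION_BUDGET_MIN - INTRO_AND_WRAP_UP_MIN
planned, backup = [], []
for name, priority, minutes in sorted(scenarios, key=lambda s: s[1]):
    if minutes <= remaining:
        planned.append(name)
        remaining -= minutes
    else:
        backup.append(name)    # only run these if time is left at the end

print("Planned scenarios:", planned)
print("Backup scenarios:", backup)
print(f"Buffer left: {remaining} min")
```

Anything that doesn’t fit goes on the lower-priority backup list and is only run if time is left towards the end of the session.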
#3 Asking the wrong questions
Many usability tests start out like this: “Take a look at this homepage and tell me what you think.” or “Take a moment to look around the site.” Don’t waste precious test time on artificial questions like these. You are trying to simulate real conditions in a usability test, but real people don’t come to your site just to browse around. Worse, you are throwing away critical insights by allowing users to familiarize themselves with your site before the real test has even started. People who will use your website or app for real will do so with a specific objective in mind, so you need to test your design against those objectives. In particular, you should focus on the “red routes” – tasks that are critical for the users and/or the organization and thus should be as smooth as possible. This will also help you stay within the allotted session time (see #2).
Another type of question to avoid is the leading question (e.g. “That new button looks nice, right?”). Instead of learning from the testers and hearing what they really think, you will hear what they think you wish to hear. Ask open-ended questions instead (e.g. questions starting with W words – what, when, where).

Test your questions to find out if they perfectly fit the test. (Source: unsplash.com)
Finally, be careful how you answer questions the participant might ask you. While you can’t (and shouldn’t) prevent your test users from asking questions, try to answer them with another question. Otherwise, you might compromise the outcome of the test by giving away answers or introducing bias (e.g. “Clicking here will guide you back. What did you expect to happen?”). Don’t tell them the answer. Instead, ask them about their experience or let them articulate their first impression or expectation. For instance, you might respond to a question with something like “What do you want to do?”, “What can you do here?”, or “What do you expect to happen?”
To minimize questions from participants in the first place, you should test your questions in advance (see #4).
#4 Failing to do a trial run
To avoid getting sidetracked and to ensure comparability of results across test sessions, it is best practice to create a detailed test outline that includes all the important tasks and questions you want to ask – often even the introduction text to read out. A common mistake, however, is not to test your test. Just like you are testing your website or app, you should do a trial run of your usability test. Even if you have prepared a detailed test guide and feel well-prepared, there are lots of things that can go wrong under real-world conditions. Just a few examples: The tasks you prepared may be unclear to participants, requiring spontaneous explanations that may give away too much information. Or tasks may simply take longer than expected, putting you and the testers under pressure. Or worse yet, your equipment might not work the way you expected, so you end up leaving participants waiting, unable to record, or unable to run the test at all. Or you may have overlooked an inconsistency in the test outline that trips you up. Better to catch that before the actual test!

Use a guideline and run a pilot test. (Source: unsplash.com)
That’s why you need to do a pilot test to determine how long the test will take and if your planned structure makes sense. Finally, once you have created your test guide and made sure it is well-structured and works, stick to it. Don’t digress from the scenarios and questions you prepared for the test session.
#5 Focusing on what they say
A common method in usability testing is “think-aloud”, i.e. asking the test participants to say out loud what goes through their heads as they interact with a system and try to solve the tasks given to them. While this type of commentary may give you valuable insight into the user’s state of mind, relying on words alone can be dangerous. Why? Because what participants say may be subjectively true, but objectively lead to wrong conclusions. First, studies have shown that users are poor at introspecting into their own motivations, so asking the user things like “Why did you do this?” may produce faulty answers. Second, we may be tempted to add questions like “How much would you pay for this product?”, which may work with a large sample, but won’t yield reliable guidance with the typical test pool of 5-8 participants. Finally, users may feel obliged to be nice to you when really they want to bang their head against the wall out of frustration with your site.

Pay attention to the actions, gestures and facial expressions of your test user. (Source: unsplash.com)
To avoid this scenario, focus on actions rather than words. Grant participants enough time to express their thoughts and perform actions during the test session, even if this takes a little while, maybe even longer than you expect. Participants often show a telling facial expression or make a gesture – pay attention to these cues. Note where participants hesitate and where they seem to be searching for options; don’t ignore it. Even if you’ve already run multiple tests and noticed some patterns, every observation matters: just because people stop at the same point of a test doesn’t mean it happens for the same reason. Be objective and treat every tester equally. Remind your users to keep verbalizing their thoughts and try to gather as much information as possible, but don’t be misled by what they say.
#6 Interrupting the usability test
Preparing a test guide is best practice, as we already saw, but it is not sufficient. A common mistake is to interrupt the flow of the usability test by inserting unrelated questions. One example is asking participants for ideas for alternative design solutions. Usability tests are supposed to help you uncover problems. To do so they focus on problem behavior, e.g. by giving the testers tasks to perform in order to achieve a specified goal (remember, usability is defined as the “extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use”). If they fail, this is critical information for you. But if you then ask them what type of design would have helped them succeed, instead of continuing to evaluate why they failed, you switch the entire discussion, interrupt the flow and likely reduce your chance of getting to the actual root cause of the issue.

Avoid unrelated questions that may interrupt the flow. (Source: skitterphoto.com)
Stick to your test guide and avoid interrupting the flow of the test session with questions your participants are actually unlikely to be qualified to answer – most users haven’t studied design and don’t know what options are even available. Instead, test different alternative solutions to begin with. This will help you identify design approaches that work and those that don’t.
Of course, you should also try to avoid other distractions that may interrupt the session, such as background noise, technical issues or unrelated questions, e.g. by setting aside some time at the end of the session to address questions by the participant or observers.
#7 No documentation
It’s always about saving time, so people often skip documenting their findings during a test session – or fail to note which methodology was used. Why take notes if you can have a meeting afterwards, especially if there are other observers in addition to the moderator? But in fact, it should be possible to easily and quickly recap the test session, and results need to be visualized for those who didn’t participate in the usability test. Without proper notes you risk missing important results, significantly reducing the overall value of your test. For instance, if you want to figure out how long it takes a user to complete a task, whether she was able to complete it at all, and how many attempts were necessary, you need good data. That’s why a permanent record of the findings is essential.

Document the findings to distill actionable recommendations. (Source: unsplash.com)
To avoid ending a usability test without proper documentation, make sure you have a designated person for taking notes during the session. You can also incorporate space for notes right into your test guide. It might also help to print out screenshots of important parts of the test to keep track of the context of your observations. After each test, schedule some additional time to go through your notes, consolidate findings from the other observers and add any additional information you might have forgotten to capture during the test. Ultimately, you want to produce actionable recommendations for the next design iteration that are based on real user behavior.
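If you capture notes in a structured way, consolidating them afterwards gets much easier. Here is a minimal sketch – in Python, with hypothetical field names and example values, not a prescribed template – of what a per-task record covering completion, duration and attempts could look like:

```python
# Minimal sketch of a structured note for one task in one test session.
# Field names and example values are illustrative assumptions.

from dataclasses import dataclass, field
from typing import List

@dataclass
class TaskObservation:
    participant_id: str
    task: str
    completed: bool                # was the task completed at all?
    duration_seconds: int          # how long did it take?
    attempts: int                  # how many attempts were necessary?
    quotes: List[str] = field(default_factory=list)        # what they said
    observations: List[str] = field(default_factory=list)  # what they did

# Example record captured during a session:
note = TaskObservation(
    participant_id="P3",
    task="Find and change the delivery address",
    completed=True,
    duration_seconds=210,
    attempts=2,
    quotes=["I expected this under 'My account'."],
    observations=["Hesitated on the settings page", "Used search as a fallback"],
)
print(note)
```

Records like this can later be aggregated into completion rates, average task times and a list of recurring problems – the raw material for actionable recommendations.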