UX testing methods—on-site vs. remote

To get the most out of the test, first you need to decide which method is best for your project. Broadly speaking, UX testing can be divided into two main categories: on-site and remote.

On-site usability testing

On-site testing requires both the participants and the moderators to be present at the same location for the duration of the test.

Although it offers clear benefits to the moderator, such as the ability to control the session and observe the participants’ natural behavior, it can be costly and inconvenient.

Remote usability testing

Running a remote test can save you money and offer a broader insight into your target audience. Since it’s done online, you can engage users from virtually anywhere in the world without any travel costs. Participants get to complete the tasks from the comfort of their homes, using their own devices. This can provide you with the feedback you need at a fraction of the cost.

Remote usability testing might prove particularly beneficial when you struggle to find suitable participants or you’re pressed for time. It might also be the only viable solution when testing particular types of products.

Remote UX testing is not without its disadvantages, though.

  1. The feedback you receive during a remote session could be less detailed, since the participants might feel awkward thinking out loud as opposed to speaking to a person sitting next to them.
  2. You might also miss out on a wealth of non-verbal cues such as facial expressions and body language.
  3. Technical issues might be another stumbling block. If your participants encounter any problems they don’t know how to tackle, they’re more likely to give up on the test.

However, in many cases remote testing will provide more than enough feedback for it to be successful.

For the purpose of this article, we will assume that you’ve opted for a remote UX test and describe in more detail the steps you need to take to make sure it meets your needs.                    

Moderated vs. unmoderated remote usability testing

Once you’ve decided to run a remote usability test, you need to choose between a moderated and unmoderated one.

Moderated remote usability testing

In a remote moderated session, the facilitator watches the users complete the tasks and communicates with them in real time. This method allows for immediate contact in case the participants require clarification or additional instructions.

A moderated usability testing session typically makes use of remote conferencing tools with screen share and video or audio connection. Participants are encouraged to think out loud as they work on the tasks and provide immediate feedback.

The moderator can ask for further feedback on a particular step or intervene in case the participant doesn’t understand the instructions or faces a technical problem. However, frequent interruptions are not recommended, as they might distract the users.

Unmoderated remote usability testing

When time is of the essence, an unmoderated remote session might be preferable. It doesn’t require a moderator to be present while it takes place, and it’s particularly useful when you need to collect large amounts of data in a short period.

Data produced during the test is captured as a click path or a video recording and can be reviewed at a later date.

A clear benefit of running an unmoderated test is that it allows you to engage a large or widely dispersed audience at a very low cost. However, given the absence of a moderator, some participants might get distracted and fail to complete their tasks.

Remote usability testing tools

When testing remotely, it’s particularly important to select an appropriate tool.

Here are some platforms that allow you to run both moderated and unmoderated remote usability tests:

  • UserLook,
  • Userlytics,
  • LookBack,
  • UsabilityHub,
  • Hotjar,
  • TryMyUI,
  • UserTesting,
  • UserZoom.

Pros and cons of moderated and unmoderated usability testing

Below you will find some pros and cons of moderated and unmoderated UX tests.

Pros of remote moderated usability testing

  1. the facilitator can intervene to modify tasks, clarify instructions, and ask follow-up questions;
  2. participants are less likely to get distracted;
  3. talking to a moderator might feel more natural than talking to yourself;
  4. allows for a flexible session length.

Cons of remote moderated usability testing

  1. moderators can inadvertently influence the completion of the tasks;
  2. a moderated session might disrupt the users’ natural thought process and influence the pace;
  3. might take longer and be more expensive than an unmoderated session.

Pros of remote unmoderated usability testing

  1. provides fast results at a relatively low cost;
  2. participants are in their own environment, which can translate into more natural behavior;
  3. no recruitment needed (if you use the testing services provided by the platform);
  4. particularly useful when you need to collect large amounts of data.

Cons of remote unmoderated usability testing

  1. fixed-length test sessions, as set by the platform;
  2. no opportunity for a moderator to step in and clarify task instructions;
  3. the selection of users might be unrepresentative of your target audience;
  4. users might get distracted;
  5. participants might feel unnatural having to talk to themselves, which can affect the quality of their feedback;
  6. not suitable for long and complex tasks.

How to run a remote usability test

So how do you actually go about running a remote usability test? In this section you will find out what you need to keep in mind to maximize the benefits, regardless of what method you pick.

When should you run a remote usability test?

Although you can run a usability test at any stage of the development process, it’s particularly useful to do so early.

Ideally, you should test a working prototype of the product you’re developing as it’s being built. Running a test on the initial design will ensure you’re on the same page with your client when it comes to the basic assumptions of the project, and help you clear up any misunderstandings.

Testing should be an iterative process, however. Running smaller tests more often—for instance after every sprint—is preferable to running a complex one once. This will allow you to refine the design and prevent usability issues from popping up down the line.

Remember that sessions don’t need to take long to produce good feedback. Remote tests typically last between 60 and 90 minutes. Allow some time at the beginning of the session to explain the tasks to the users and answer any questions they might have. Make sure you’ve also reserved enough time for thorough feedback towards the end of the test.  

How do you recruit remote usability test users?

Whether you choose to recruit the participants yourself or outsource the task to an external agency, you should have a basic screening procedure in place to make sure you have the people you need. This could take the form of a simple questionnaire or a quick phone call. By now, you should already have a clear idea of your target audience persona, so try to recruit users that fit the criteria.

Although it’s great to have an experienced user on board, it’s equally important to recruit participants with varying levels of familiarity with technology. You want to make sure that your tool is easy to use for everyone, not just experts.

You might be tempted to recruit users from among your friends and colleagues to reduce costs. But bear in mind that unless your participants are representative of your target audience, you might end up with inaccurate or biased feedback, which could result in additional costs down the road.

If you struggle to recruit the right users, remember that there is no universally agreed-upon number of participants needed to make the test successful. In fact, running large, elaborate tests when there is no obvious need to do so might actually prove counterproductive.

According to research by the Nielsen Norman Group, a leading UX consulting group, the best results come from tests that involve no more than 5 participants. Any more than that and you risk wasting your time and money by getting the same observations.
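To see why the returns diminish so quickly, here is a quick back-of-the-envelope sketch. It uses the problem-discovery formula commonly cited alongside that research, 1 − (1 − p)^n, with the often-quoted average per-participant discovery rate of about 31%; treat the output as an illustration rather than a prediction for your own product.

```python
# Rough illustration of the problem-discovery model often cited with this research:
# share of usability problems found = 1 - (1 - p)^n, where p is the chance that a
# single participant reveals a given problem (commonly quoted as ~31%) and n is the
# number of participants. The 31% figure is an average from published studies,
# so these numbers are illustrative, not a guarantee for your product.

def problems_found(n_participants: int, p: float = 0.31) -> float:
    """Estimated share of usability problems uncovered by n participants."""
    return 1 - (1 - p) ** n_participants

for n in (1, 3, 5, 10, 15):
    print(f"{n:>2} participants -> ~{problems_found(n):.0%} of problems found")
```

With five participants the formula already lands above 80%, which is why extra sessions in the same round tend to re-surface issues you have already seen.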

Since your goal is to improve the design of your project and not just to report its weaknesses, your resources and energy would be better spent on running multiple tests with a smaller number of users.

Remember to emphasize that what is being tested is the product, not the user. This will remove any unnecessary pressure and set realistic expectations. Explain that not being able to complete a task is not a failure but an additional piece of valuable feedback that will help you make the product more user-friendly.

Finally, don’t forget that building a good rapport with your users is key to making them feel relaxed and keen to share their observations with you.

How do you draft a good usability test scenario?

A scenario is an indispensable element of every usability test, as it provides the users with a set of instructions necessary to complete the tasks.

Before you draft one, figure out what specific research goals you aim to achieve, and whether the session will be a quantitative or qualitative study.

When designing quantitative tasks, the instructions should be clear and focused to make sure all users perform exactly the same activity. For instance, you could set the following task: “Order two small pizzas, two drinks, and a large side of chips.” Phrasing a task this way will allow you to capture a specific metric, such as the error rate.

Qualitative studies, in turn, are designed to understand the users’ personal perception of the tool and record their observations. The questions you ask should be open-ended and leave room for the users’ own interpretation and insights.

To get a holistic picture, consider combining both methodologies in one test. Remember that the tasks should be realistic enough to resemble a natural usage of the tool. Avoid precise, revealing instructions, and let the user figure out the solution by themselves. For instance, say “order two pizzas” instead of “go to the Takeaway section and then click on the big red button to order pizzas.”

What metrics should you use to evaluate a test?

How do you know whether a session has been successful?

We believe that every piece of feedback you gather during a usability testing session is valuable information about the product. There are no “good” or “bad” results. A user failing to complete a task—as long as the instructions were clear—is just as useful in terms of feedback as a success.

When evaluating a session, select the indicators that reflect the priorities set out in the scenario and the intended use of the product. To get a good picture of the usability of the product you’re testing, the indicators should be both qualitative and quantitative, and could include, for instance:

  • error rate,
  • completion rate,
  • time spent completing a task,
  • satisfaction rate.

It’s not time-effective to measure every single aspect of the product, so be intentional about the metrics you use.
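If you record each task attempt in a simple, structured form, these indicators take only a few lines to compute. Here is a minimal sketch in Python; the field names and sample data are made up for illustration and are not tied to any particular testing platform.

```python
# Minimal sketch: computing basic usability metrics from task results.
# The record structure and sample values are illustrative only.
from statistics import mean

results = [
    # one record per participant per task
    {"completed": True,  "errors": 0, "time_sec": 74,  "satisfaction": 4},
    {"completed": True,  "errors": 2, "time_sec": 131, "satisfaction": 3},
    {"completed": False, "errors": 3, "time_sec": 180, "satisfaction": 2},
    {"completed": True,  "errors": 1, "time_sec": 95,  "satisfaction": 5},
]

completion_rate = mean(int(r["completed"]) for r in results)
error_rate = mean(int(r["errors"] > 0) for r in results)            # share of attempts with at least one error
avg_time = mean(r["time_sec"] for r in results if r["completed"])   # time on task, successful attempts only
avg_satisfaction = mean(r["satisfaction"] for r in results)         # e.g. a 1-5 post-task rating

print(f"Completion rate:  {completion_rate:.0%}")
print(f"Error rate:       {error_rate:.0%}")
print(f"Avg time on task: {avg_time:.0f} s")
print(f"Avg satisfaction: {avg_satisfaction:.1f} / 5")
```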


Analyzing remote usability test user feedback

You’ve successfully gone through the various stages of setting up a remote usability test and gathered user feedback. How do you analyze the findings, and how do you know which suggestions to implement?

If your scenario was comprehensive, you probably collected a significant amount of data. First, gather it all in one place and structure it in a way that allows your team to access and understand it easily. Group the inputs and prioritize them based on their importance to the project. You might want to separate easy-to-fix bugs from issues that require a more thorough discussion within the team.

Prioritization will also help you establish the right timing. In our previous tests, we used the Impact, Cost, and Effort score to determine the significance of each issue and any potential solutions.
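The exact scoring scheme is up to your team; the sketch below shows one possible way to turn such scores into a ranked backlog. The 1 to 5 scales, the sample issues, and the formula (impact divided by the sum of cost and effort) are our illustrative assumptions, not a fixed standard.

```python
# One possible way to rank usability issues with Impact, Cost, and Effort scores.
# The 1-5 scales, sample issues, and priority formula are illustrative assumptions;
# adapt the weighting to your own project.

issues = [
    # (issue, impact, cost, effort): higher impact = more valuable to fix,
    # higher cost/effort = more expensive to fix
    ("Checkout button hard to find",      5, 2, 1),
    ("Confusing copy on the signup form", 3, 1, 1),
    ("Navigation menu needs a redesign",  4, 4, 5),
]

def priority(impact: int, cost: int, effort: int) -> float:
    """Simple priority score: impact gained per unit of cost and effort."""
    return impact / (cost + effort)

ranked = sorted(issues, key=lambda item: priority(*item[1:]), reverse=True)
for name, impact, cost, effort in ranked:
    print(f"{priority(impact, cost, effort):.2f}  {name}")
```

Easy fixes with a high score can go straight into the next sprint, while low-scoring items are good candidates for the more thorough team discussion mentioned above.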

Analyzing the results is also a good moment to establish common ground within your team with regard to project requirements and remove any inconsistencies.

Final thoughts

There are few things more expensive than your developers’ time these days. If your aim is to manage it in a cost-effective manner, you can’t afford not to test your product as early as possible.  

Testing doesn’t have to mean setting up a complicated lab test, or making travel arrangements. You can take advantage of the large variety of tools available online to help you run a successful remote usability test that provides meaningful results at a much lower cost to your business.

Following our suggestions above will help you maximize the benefits of remote usability testing. If you’d like to find out even more about UX design and analytics, take a look at our other blog posts.

Also, check out our Portfolio to find out what our Product Design experts have been up to and take a look at the results of their work on Behance.

If you have any questions, feel free to leave a comment below or get in touch with us. We would be delighted to support your project with our extensive expertise in UX design and analytics.