Helping participants in usability tests

by Mitch Malone   Last Updated January 13, 2018 15:16

I've just completed a round of lab-based, co-present usability tests as part of a usability audit for a web-based map application. I have ten tasks for the participants that require them to perform typical map activities (zoom in on this area, find this location, measure the distance between x and y, etc.).

The first task is to zoom in on the map at a location of their choosing (there is a little magnifying glass in the toolbar that lets the user do this).

But one of the participants simply couldn't find the zoom tool. This is good information, because we can take actionable steps to make it more visible. However, a lot of the subsequent tasks required the use of this zoom tool; without knowing how to zoom, the participant couldn't complete the test. So I pointed out to the participant where the zoom tool was. I realize this is a big no-no in usability testing, but I felt it was necessary in order to complete the test.

My question is: should I have intervened and helped the user, or should I have ended the test? At what point do you abandon a usability test for the sake of keeping a more realistic context?



6 Answers


We had this exact same issue. The user had been given a task and couldn't figure out how to do it. I patiently waited a couple of minutes for them to try to find it. Eventually, I did what you did and just told them, so they could get on with the rest of the test.

I think that when people say "Don't help the person doing the usability test," what they really mean is that you shouldn't sit there handholding them and guiding them through it step by step. You're trying to simulate what will happen when your program or website is being used out in the wild.

But, if they get into a situation like you're describing, out in the wild, they're going to give up, go to a different site, uninstall the program, or otherwise discard your product.

As you mention, if this happens, you need to note that problem, because it's a big deal. Especially if it happens with lots of users.

But once you get to that point, there's nothing wrong with cutting your losses, telling them just enough to get started again, and moving on to do usability testing on something else while you've got them there. There's no sense throwing away the possibility of learning additional things just because they got stumped on one thing.

rbwhitaker
April 18, 2012 21:55

If only to see the results of further testing, the intervention was a good decision that time, but learn from this information, because either:

  • it's not good that your tasks all rely on one other piece of functionality; or
  • if your application does rely mostly on that functionality, it's certainly disastrous that any part of your test group was not able to find and/or use it.

In other words, you should either make your application more usable when that functionality isn't used (even if you make it clear where it is accessed, there may always be people who can't operate it properly or without much difficulty), or you should make that functionality stand out more in the UI by making it draw attention (size, colour, motion, whatever you find appropriate). And then make sure that working with it is child's play.

MarioDS
April 18, 2012 21:59

There are plenty of folks who will say that your test should have ended at that first step -- when you rightly noted that the participant could not get past step one -- and you should just record that data point and move on to the next participant. Nothing wrong with that.

Once I was that tester who couldn't find the zoom tool -- except it was "find the (some esoteric icon representing a) tool in Google Toolbar", many years ago when Google Toolbar was a browser add-on and Google brought a lot of people on campus for usability tests. It was the first question in their test. I said "really, I can't see it. I would imagine it would be here or here or here". They pointed it out to me, much like you did with the zoom tool, and we continued on with the test. I don't know how they used the data, but they did continue on with the same script.

For the sake of ensuring sane data (e.g. all users completing all tests), I wouldn't consider the rest of this user's data (or mine, in the situation above) when calculating results, because to my mind it would be tainted/biased. BUT there's nothing wrong with turning the rest of your time with the user (or potential user) into something from which you could get additional beneficial data. Maybe that's going through the rest of the test, getting answers and seeing actions, and gathering the data without calculating or considering it in the same way; or maybe it's shifting the time in the session to a different test.

I would consider your situation similar to the one posed in How to rescue a usability test whose participant is lacking confidence?, in which responses included ideas like ditching the canned tasks for a self-defined one, turning the "test" into an "interview," and so on.

Yes, I'm essentially answering "it depends." While I would probably have gone with turning the test into something else, I might also continue the test but not weigh the results quite so much -- depends on how many testers I had in the queue and where I was in the testing process.

jcmeloni
April 18, 2012 22:17

Of course you should help the user so that you can get as much information out of the usability session as possible. You found one usability issue, but there could be others, and those issues could be quite independent (they can be addressed individually without stepping back and redesigning the whole thing).

You don't want to engage in N usability tests and N rounds of fixing to fix N usability problems one at a time.

That is time-consuming and expensive.

A usability test is not 100% scientific. If it were, you would make it double-blind or something; you would not even be present in the room, so that you could not influence the subjects in any way (body language, etc.).

You're not trying to publish something in a scientific journal or trying to get a government research grant, etc. (And those people fudge plenty.)

Kaz
April 18, 2012 23:35

You don’t end the test, but you don’t point out the tool either.

If someone is hopelessly stuck, then obviously for summative purposes you score that session as “Unable to complete task (without help)” and include it in the No Joy category for statistical purposes. As long as you have consistent rules for judging when the user cannot continue on his/her own, your quantitative data will be perfectly valid, and you can continue the session and collect more data to help inform the design.
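
To illustrate the summative side, here is a minimal sketch (in Python, using a hypothetical record layout that is not from this answer) of how sessions scored "unable to complete without help" can simply be counted as failures when you tally completion rates:

    # Hypothetical summative scoring: assisted completions land in the "No Joy" bucket.
    from dataclasses import dataclass

    @dataclass
    class TaskResult:
        participant: str
        task: str
        completed: bool   # did the participant finish the task at all?
        assisted: bool    # did the facilitator have to point something out?

    def unassisted_completion_rate(results):
        """Share of attempts completed without help; everything else counts as failure."""
        if not results:
            return 0.0
        successes = sum(1 for r in results if r.completed and not r.assisted)
        return successes / len(results)

    results = [
        TaskResult("P1", "zoom in on an area", completed=True, assisted=False),
        TaskResult("P2", "zoom in on an area", completed=True, assisted=True),   # helped: No Joy
        TaskResult("P3", "zoom in on an area", completed=False, assisted=False),
    ]

    print(f"Unassisted completion rate: {unassisted_completion_rate(results):.0%}")  # 33%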

The reason for not pointing out the tool is, ironically, to collect more data. You know the user couldn’t find the tool, but you probably don’t know why. In general, an inability to find something on a page may be because:

  1. Users looked at it, but didn't recognize the label/icon.

  2. Users looked towards it but didn’t see it because it was lost in clutter.

  3. Users were looking for it somewhere other than where you put it.

  4. Users were looking for something entirely different than what you used.

Each of these reasons has very different design responses. Eye-tracking data can help narrow the possible reasons, but interview data is also often necessary.

So don’t show the user the tool. First, ask questions to diagnose the problem:

  • What are you trying to do? (I need to zoom in)

  • What are you looking for to zoom? (A sliding thingy, like in Google maps).

  • Okay, that’s a good way of doing it, but we’re trying out a different method. What label or icon would Zoom have? (I don’t know. Usually it’s a magnifying glass)

  • (pause)

  • Where are you looking for it? (Right here at the top.)

  • What do you see there? (A printer for printing, the Save icon, a push-pin to mark a point.)

  • Oh. The push-pin is supposed to be a magnifying glass. We’ll work on that.

There. Now you know two ways to improve the design (use a slider if feasible, re-work the magnifying glass image), and three ways not to improve it (changing from a magnifying glass to some other object, making the control bigger or bolder, moving it somewhere else).

This is the general rule for usability testing. For each problem the user encounters, avoid giving the solution. But don't just give up. Instead, ask questions to gather data and, in the process, guide the user progressively closer to the solution. And then continue with the usability test.

Michael Zuschlag
April 19, 2012 12:15 PM

Our protocol was very simple. Some users start asking questions before they even start to look at the screen (sad but true :) ). So the first time the user asks, you answer: "Please try and find the solution yourself by looking at the screen" (after a while you can say stuff like that quite convincingly). If the user is still stuck (keep a quiet eye on your timer, and give them one minute or whatever time interval you agreed with the rest of your team), then point out to them the feature they should be using, with a gentle but not patronising smile. But before you let them carry on, ask them why they had a problem here.

Your user testing results should include the information that the user had a problem [here] and that the reason for the problem was [this]. Don't rely on video or audio recordings; they take hours to review and code (usually 3x as long as the session). Just talk to the user and make a note on your clipboard.

At the end of the session ask your user to retrospect for a minute and to tell you where and why they had the biggest problems, and where they felt they were really shooting along (AKA 'critical incident' gathering). Ahem... make a note of this data: it's your gold-mine.

Of course, this will not give you good data for 'time on task' or any other measures based on performance, but that's a different kind of evaluation.

Jurek
January 13, 2018 14:45
