A couple of months ago, the Test Pilot team sat down with six volunteer users, one at a time, and asked them to go through the steps of installing Test Pilot and submitting test results.
(Yes, that’s right: we were testing Test Pilot — feel free to make infinite recursion jokes.)
What we found was extremely valuable. The same problems happened again and again. Six users may not seem like enough to give you useful information, but believe me, after you’ve seen the fourth or fifth user in a row trip over the exact same usability problem, you’ll have a pretty good idea of how high a priority it is. (Statistics rule of thumb: if you have a problem that affects 1/3 of users, then you only need to interview 5 users to have an 85% chance of seeing it).
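To make the rule of thumb concrete: the only way a problem affecting 1/3 of users goes unseen is if all five subjects happen to miss it, so the chance of catching it at least once is 1 − (2/3)^5 ≈ 87%, if anything a bit better than the 85% quoted above. Here’s a minimal sketch of the arithmetic (the function name is just for illustration):

```python
# Chance of observing a usability problem at least once in a small study,
# where p is the fraction of users affected and n is the number of subjects.
def chance_of_seeing(p, n):
    # The problem goes unseen only if all n subjects miss it,
    # which happens with probability (1 - p) ** n.
    return 1 - (1 - p) ** n

print(chance_of_seeing(1 / 3, 5))  # ~0.868, i.e. about an 87% chance
print(chance_of_seeing(1 / 3, 6))  # ~0.912, a sixth subject pushes it past 90%
```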
This was my first time doing user-interview-style studies. They’re a very different kind of usability research from the massive data collection we do through Test Pilot. One is qualitative, the other quantitative. One deals with individuals, the other with large groups. The two methods are complementary. You can collect all the statistical usage data you want, but it will never tell you what a user is thinking, how they feel about your software, how your software fits into their life, or what the meaning behind their interactions was. On the other hand, if you rely only on individual user interviews, you’ll fall into the trap of anecdotal data: you have some interesting stories, but you don’t know how typical they are of the whole population, or how important one user’s favorite use case is in the grand scheme of things. Using both methods together produces a much more complete picture than either one can alone.
What we found out
- Most users weren’t sure what to do next after installing Test Pilot, seeing no clear call to action on the “Welcome” page.
- Almost everyone was confused by the many options in the Test Pilot menu.
- They were confused about the difference between a “survey” and a “study”, and why they saw both an “Accounts and Passwords Survey” and an “Accounts and Passwords Study”.
- They weren’t sure whether a given study was currently running, or how to tell what they were supposed to do about it.
- Most people went to the “All Studies” page and were disappointed to see that it didn’t really list all studies, only the ones currently running.
- Most people missed the notifications, accidentally dismissing them without even noticing that they had appeared.
- Finally, many people requested a way to be notified when a study they had participated in produced some kind of tangible results.
A funny story: Several of the users in the study believed their data submission was being rejected when they clicked the “submit” button. They got understandably frustrated, asking what the point was of going to all that effort if their data wasn’t going to be accepted.
Actually, the data had been accepted. In fact, Test Pilot doesn’t even have a concept of rejecting a user’s data submission, even after a test ends. So what was the problem? Well, here’s the message that appeared after a successful upload:
> Thank you for submitting your data!
>
> This study is no longer collecting data, and the data that was collected has been deleted from your computer.
Oops! What I meant was that the local phase of data collection had been successfully completed, and that since it had been uploaded, the local copy had been wiped in accordance with our privacy policy. But it’s easy to interpret the message as meaning that your data was rejected because the study as a whole was over. A poor choice of words on my part.
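One way to avoid this kind of ambiguity is to give each fact its own sentence: confirm the upload first, explain the deletion as a privacy feature, and report the study’s status separately. Here’s a minimal sketch of that idea, assuming a hypothetical upload_message helper (neither the function nor the strings are Test Pilot’s actual code):

```python
# Hypothetical sketch: compose the post-upload message so that acceptance,
# privacy cleanup, and study status are each stated separately.
def upload_message(study_is_running):
    lines = [
        "Thank you! Your data was submitted successfully.",
        # Tie the deletion to the privacy policy so it reads as a feature,
        # not a rejection.
        "As promised in our privacy policy, the local copy of your data "
        "has been deleted from your computer.",
    ]
    if not study_is_running:
        # Mention the study being over only after confirming acceptance.
        lines.append("This study has now finished collecting new data.")
    return "\n".join(lines)

print(upload_message(study_is_running=False))
```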
In response to these findings, we did a total interface redesign, which has now been released as part of Test Pilot 1.0 alpha.
Tips for doing user studies
- Give the subject a task to do, but don’t explain how to do it. The idea is to observe the process as the subject tries to figure out the interface. We wanted to observe the Test Pilot install and first run process, so we started out with “Pretend you’ve just heard about Test Pilot and want to install it. What do you do?” and we went from there.
- While you talk the subject through the test, have an extra person sit in the back of the room taking notes on everything the subject does. This is much more reliable than trying to remember the important parts afterwards, and much less distracting than trying to both talk and record at the same time.
- Make the subject feel comfortable. People’s behavior changes if they feel nervous, uncomfortable, or self-conscious. You want them to be happy and relaxed so they can focus on the task.
- Never, ever make the subject feel bad about their mistakes! Make it clear that the subject is not on trial here: the software is. If the subject apologizes for screwing up, or if they feel dumb because they can’t figure something out, then redirect the blame away from the subject and onto the software where it belongs!
- Encourage the subject to “think out loud”: the more they can verbalize their thought process — what they’re looking for, why they’re looking for it in a certain place, what they expected to happen vs. what actually happened — the more useful info you can get.
- Resist the urge to tell subjects what they’re doing wrong. Bite your tongue if you have to. The whole point is to see what naturally gives them trouble. You’ll defeat the purpose of the study if you try to steer them away from trouble spots.
- Even if they ask you how to do something, don’t tell them! Instead, say “I’ll answer any questions you have after we’re finished with the test. For now, please try to figure it out.” Again, the idea is to see if they can figure out how to do the task on their own.
- However, if the subject gets completely stuck, you can give them a little nudge towards the next step. It’s better to continue with the study than to let the subject sit there stewing in frustration.
A more experienced usability tester could probably offer several more suggestions.
April 21, 2010 at 6:24 pm
“Most people missed the notifications, accidentally dismissing them without even noticing that they had appeared.”
This has happened to me more than once. I’ll be in the middle of typing or clicking around a site and I’ll briefly see a notification that then disappears straight away.
Keep up the good work.
April 25, 2010 at 6:21 pm
Regarding Test Pilot, I must admit I really hate the new context menu. I have no idea what tests I’m currently doing or what upcoming tests there are, if any. It’s become incredibly confusing and I must admit that I’m hugely disappointed.
April 26, 2010 at 5:49 pm
Hi sabert00the,
I’d welcome more detailed feedback: what do you find confusing about the context menu? How can we make it better? The information about what tests you’re currently doing is now in the new “all your studies” window, under the theory that we can provide much better information in that window than we could have provided in the menu. Is the “all your studies” window not working well for you? Or is it just annoying to have to take an extra step in order to see it?
Regarding upcoming tests, unfortunately we haven’t done a good job of conveying that information under either the old interface or the new one – mainly because we’ve been releasing each test as it is finished rather than officially scheduling them ahead of time.
Thanks for your input!
April 28, 2010 at 4:49 am
I really wish there had been a reference to this blog posting when the 1.0 version was rolled out. That was an enormous change from 0.4, and some idea of the rationale behind it would have been much appreciated.
About the comment from sabert00the: the “all your studies” window gives a great overview, but being able to see the active studies at a glance in the menu was a big help.
September 9, 2010 at 2:25 am
Just mentioning that I mistakenly cancelled out of the About Firefox study when all I wanted to do was submit results later. I’d like to be able to re-enable it.
I’m letting you know because I think what you’re doing is a good thing :-)