There’s a lot of talk these days about being a user experience designer — what it means, who does it, who doesn’t do it, how to break into the field. This kind of dialogue is illuminating at times — contentious at others — and, honestly, I’m not a big fan of talking about it like that (I think it can get quite silly actually). But I do like talking about the different things you typically find within the user experience field and how to apply them to your work.
Usability testing is one of the most interesting aspects of our field. It’s something we incorporate in our work at Emma, but haven’t shared much about until now. In the coming months, we’ll be talking more about usability testing here and over on the main Emma blog.
Today, I’ll share how we run our usability tests at Emma. We don’t have a fancy lab or anything (not that there’s anything wrong with that), yet we have a fairly established process around testing new product features. It’s a mixture of some typical usability testing protocol and some guerrilla techniques (we’re such rebels).
Have something to test
It may be a bit obvious to mention this, but we need something interactive to test before we get started. This can be a prototype, functional front-end code or even an early development version of a feature. We prefer to show the new feature in context. But if we don’t have that, we can fake it with a little digital camouflage. We’ll take a screenshot of the UI where the new feature will be implemented, clear out everything we don’t need in Photoshop, and use that as a background image for the prototype. Not everything on the screen works, but at least it looks closer to the final product.
Know what you want to test
There are usually specific tasks participants need to be able to accomplish with a new feature. We document these tasks — from very small tasks to more complex ones — and write non-leading questions to see if participants can accomplish them. We also write a few sentences that describe the context of use and how they might get to the feature if it’s part of a larger workflow. (Again, context is so important!)
Prep the testing environment
We make sure the environment we’re testing in has everything we need to easily reset after each session. If it’s a prototype or front-end code, that means we refresh the browser after each session. If it’s a development version, we might need to clear out data on the server that was created during the previous session. We want to make sure each participant starts with the same experience. It lends structure to the test — and helps us get reliable results.
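If the prototype keeps its test data in a local directory, that reset step can even be scripted. Here’s a minimal sketch in Python — the pristine/live directory split and the names are hypothetical, not part of our actual setup:

```python
import shutil
from pathlib import Path

def reset_session(pristine: Path, live: Path) -> None:
    """Restore the prototype's data directory to its baseline state
    so the next participant starts with the same experience."""
    if live.exists():
        # Throw away whatever the last session created.
        shutil.rmtree(live)
    # Restore the untouched baseline copy.
    shutil.copytree(pristine, live)
```

Run it between sessions and every participant sees the same starting state. How you snapshot the baseline (a directory copy, a database dump, a browser refresh) depends entirely on what you’re testing.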
Record each session
We use Silverback to capture video and audio. In short, we want to see and hear how the participant interacts with the feature. If you haven’t used Silverback before, it’s pretty straightforward. We load up the feature we’re testing, start Silverback, create a new project, create a new session, center our mug for the camera, and hit the space bar (this will make sense if you watch the demo). We hold usability tests in a quiet meeting room at our office, but the participant’s workspace would also work well. If neither is an option, just find a spot that’s easy to access and distraction-free. After all, this is a guerrilla usability test. No mercy.
Recruit your participants
So, you might wonder, how do we find our victims, er, participants? We prefer to test with at least five people, and we don’t worry too much about recruiting the exact demographic/audience for each feature. It’s best to select participants of varying ages, genders and technical expertise. (You might call this guerrilla sampling.) After all, we’re looking for qualitative information here, and it matters more that participants are willing to dig in and give us honest feedback. Once we select the participants, we schedule meeting times.
Run the test
When our participant arrives, we introduce the usability testing process: what we’ll be doing and that we’ll be recording the session (there’s a disclosure document and all). Then, we read a statement of informed consent that outlines their rights as a participant in the test. We make clear that we’re testing the interface, not them; there are no right or wrong answers. We encourage participants to think out loud during the test to help us better understand the motivations behind their interactions with the feature. Next, we explain the context for the feature to help them understand how they arrived at the screen we’re showing.
When they’re ready, we’re ready to roll: time to hit the spacebar and start recording. We go through the task-related questions and give participants time to play around and complete the tasks. This can take anywhere from 15 minutes to almost an hour, but usually no more. At the end of the tasks, we ask some more general questions about the interface: we want to know their impressions of how it works and looks, and we encourage them to share any other thoughts and ask questions. Finally, we conclude the test.
Analyze the results
After all the sessions are completed, it’s time to see if any usability issues emerged (they always do). We export a movie of each session and watch them one at a time while taking copious notes. While we usually notice some fairly obvious issues during the sessions themselves, it isn’t until we watch the recordings that we spot the more subtle ones.
Then, we transfer the notes about the issues that emerged into a spreadsheet (the AK-47 of usability testing). It helps us quantify the similarities and differences between participants. From there, we prioritize and design solutions where appropriate. Finally, we put all of our findings into a usability report to share with project stakeholders (along with some of the “best of” portions of the session videos). It’s fun, even surprising, for stakeholders to see how participants use the features we’ve designed.
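The tallying the spreadsheet does can also be sketched in a few lines of code. Here the note format — simple (participant, issue) pairs — and the issue names are purely illustrative, not our actual template:

```python
from collections import Counter

# Hypothetical notes from session review: (participant, observed issue) pairs.
notes = [
    ("p1", "missed the Save button"),
    ("p2", "missed the Save button"),
    ("p3", "confused by the tab labels"),
    ("p4", "missed the Save button"),
    ("p5", "confused by the tab labels"),
]

participants = len({p for p, _ in notes})
issue_counts = Counter(issue for _, issue in notes)

# Most frequent issues first: a rough priority order for the report.
for issue, count in issue_counts.most_common():
    print(f"{count}/{participants} participants: {issue}")
```

This counts raw mentions; if a participant could hit the same issue more than once in your notes, wrap the pairs in set() first so each issue is counted once per person.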
The usability testing protocol we use is based mostly on concepts from the excellent book Observing the User Experience: A Practitioner’s Guide to User Research. A heavy yet highly recommended read.
I’d love to hear how you plan and implement usability tests at your organization. Feel free to share in the comments below, or ask any questions about usability testing at Emma. We’ll answer your questions in upcoming posts.
Viva la revolution!