
Prototype Testing

Prototype testing is what most people think of as user testing — you have users perform certain tasks with an early version of a product and observe them to see where they encounter difficulties.

To see the greatest benefit from prototype testing, you should start testing prototypes as early as possible. It's not necessary to have a fully working product or system before you do this sort of test. We will talk about three levels of prototypes that you can use to get user feedback, even before you have a working system: paper, low-fidelity, and high-fidelity.

Paper Prototype Testing

You don't need a computer, or a single line of code written, to get good user feedback: you can run a very effective test with hand drawings or wireframes of your user interface on paper. This is known as a "paper prototype". In this sort of test, you ask one member of your team (in addition to the note taker and facilitator) to act as the "computer". When the user "clicks" (points to) the screen, the "computer" puts the sketch showing the resulting screen down in front of them. If the "computer" does not have that screen ready, they can either sketch it on the spot or use a generic "sorry, that function doesn't exist" screen to guide the user away from that function.

Low-fidelity Prototype Testing

As your design ideas become more codified, you may wish to develop a low-fidelity interactive prototype to test with. This is a prototype quickly put together with tools such as Microsoft PowerPoint (you can create buttons or links to move you between slides, simulating interactions) or Adobe Dreamweaver (you can create static web pages which display the design and have some interactivity).

High-fidelity Prototype Testing

As you come closer to a final design, creating high-fidelity prototypes for testing can be very helpful. This may be an early version of your application that isn't quite complete; for instance, it may have some elements hard-coded that will be interactive in the future, such as a search that returns the same results regardless of what was entered. On a larger project, you may find it helpful to develop an advanced prototype in a tool such as Adobe Flash.
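To make the idea concrete, here is a minimal, hypothetical sketch (in Python; the names and data are invented for illustration) of a hard-coded search handler that ignores its input, so the surrounding interface can be tested before the real backend exists:

# Hypothetical stub for a high-fidelity prototype: the "search" always
# returns the same canned results, regardless of what the user entered.
CANNED_RESULTS = [
    {"title": "Where to get your Cal 1 Card", "url": "/cal1card/get"},
    {"title": "What businesses accept the Cal 1 Card", "url": "/cal1card/merchants"},
]

def search(query: str) -> list:
    """Return identical results for any query during the test session."""
    return CANNED_RESULTS

if __name__ == "__main__":
    print(search("library hours"))   # same output...
    print(search("parking permits")) # ...as this one

Once the real search service exists, the stub can be replaced without changing the interface the rest of the prototype depends on.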

Production System Testing

It is also quite common to do user testing on your production system in order to find usability issues and continue to improve its user experience. In the world of agile development, where small chunks of application functionality are developed each iteration, this is becoming more and more common. As long as user feedback is truly incorporated back in the design and development cycle, this can be a very effective method.

Naturalistic Usability Testing

As you develop higher-fidelity prototypes that include more functionality, or if you would like to evaluate the usability of an existing system, you may wish to devote part of each user's session to a "naturalistic usability test," in which the user essentially determines their own tasks: simply ask them to interact with the system as they normally would. While it's still helpful to have the user think aloud during a naturalistic usability test, it's important not to interrupt their thought process. If you need to ask questions, wait until the end of the task or until there is a natural pause.

Formal Usability Testing

Formal usability testing is usually done in a usability lab, with computers outfitted with screen and keystroke capture software, a video camera to record participant actions, and sometimes even eye-tracking software. The protocols for this type of testing are usually defined much more formally and may focus more on quantitative measures, such as how long it takes a user to complete a task, than on qualitative findings such as which parts of the system users find difficult to understand.
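The sketch below is a hypothetical illustration (in Python) of the kind of quantitative measure such a protocol collects; real labs capture this automatically with dedicated software:

# Hypothetical sketch: record how long a participant takes to complete a
# task, appending one row per task to a CSV log for later analysis.
import csv
import time

def timed_task(participant_id: str, task_name: str, log_path: str = "task_times.csv") -> float:
    input(f"{participant_id}: press Enter to START task '{task_name}'...")
    start = time.monotonic()
    input("Press Enter when the task is COMPLETE...")
    elapsed = time.monotonic() - start
    with open(log_path, "a", newline="") as f:
        csv.writer(f).writerow([participant_id, task_name, round(elapsed, 1)])
    return elapsed

Comparing these times across participants (or across design variants) yields the efficiency measures that formal testing emphasizes.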

Card Sorting

Card sorting is one of the best ways to figure out how to organize different pieces of information for presentation on a website or in an application. The technique is fairly simple:

  • Divide all of the information you need to present into fairly small chunks (e.g., "Where to get your Cal 1 Card", "What businesses accept the Cal 1 Card", etc.) and write each chunk on an index card, along with a few words of explanation if necessary. Numbering the back of each card makes it easier to record the results of each test; numbering the back, rather than the front, avoids influencing the user with the sequence of your numbers.
  • Provide additional cards to label the categories that these chunks of information will be sorted into. These category cards can be entirely blank, or you can include a few predefined categories. Either way, always give the user a few blank cards so they can create their own categories or note missing pieces of information.
  • To conduct the test, ask the user to sort the chunks of information into the categories that make sense to them, thinking aloud as they do so.
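After several users have sorted the cards, a common way to aggregate the results is a co-occurrence count: for each pair of cards, how many participants placed them in the same category. Here is a hypothetical sketch in Python, assuming each participant's sort was recorded as a mapping from card number to category name:

# Hypothetical sketch: aggregate card-sort results into pairwise
# co-occurrence counts. Each sort maps card number -> category name.
from collections import Counter
from itertools import combinations

def co_occurrence(sorts):
    counts = Counter()
    for sort in sorts:
        by_category = {}
        for card, category in sort.items():
            by_category.setdefault(category, []).append(card)
        for cards in by_category.values():
            for pair in combinations(sorted(cards), 2):
                counts[pair] += 1
    return counts

# Example: two participants sorting four numbered cards.
sorts = [
    {1: "Getting a card", 2: "Getting a card", 3: "Using the card", 4: "Using the card"},
    {1: "Getting a card", 2: "Using the card", 3: "Using the card", 4: "Using the card"},
]
print(co_occurrence(sorts))  # (3, 4): 2 -> both participants grouped cards 3 and 4

Card pairs with high counts are strong candidates to live together in the final organization of your site or application.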

Remote User Testing

What: Use screen-sharing software to observe and interview test subjects at a remote location

Use When:

  • You are developing a component at an institution where you have little or no access to UX resources.
  • You would like to have UX experts from other institutions or locations "sit in" on the test and provide input.
  • You are unable to find suitable test subjects in your immediate area.
  • You have a global user population and you want to include users from different geographic areas in the user testing.

Benefits:

  • Recruiting test subjects is easier, since candidates are not limited by geography.
  • A developer without local access to UX resources may be able to recruit UX experts from other institutions to help with testing.

"Remote Online Usability Testing: Why, How, and When to Use It" by Dabney Gough & Holly Phillips provides more information on the subject.

Tools for Remote User Testing

The current implementation of VUlab uses Macromedia Breeze's screen-sharing technology and meeting recording tools to create videos of user actions during a user test. Adobe provides a tutorial on using screen sharing in a Breeze meeting. (Macromedia Breeze was bought by Adobe and became Adobe Connect.)

Adobe Acrobat Connect Pro is a web conferencing and e-Learning solution. Its screen sharing and recording capabilities also make it a potential solution for remote user testing.

User Testing Facilitation Methods

Although we generally recommend the "talk aloud" facilitation method for user testing, there are several approaches that differ in how the facilitator interacts with the participant.

In the "talk aloud" method, the facilitator asks (and encourages) the participant to describe what they are doing, why they are doing it and any feelings they may be having as they work with the product or system. In this case, it is more important to understand how a participant is experiencing a product or system and why he is taking a particular action than to get a clean measure of how quickly they do it. (Note: in some cases it can still be valuable to collect times and compare them, it just isn't the primary evaluation criterion.)

A/B testing is another user testing method, used to compare one version of a product or system against another. Participants are given a task and asked to complete it as they normally would. They may be asked questions following a task, but are not spoken to during the task. This approach is used when it is important to measure how efficiently users complete tasks, for example, when performing identical tasks on different applications. When doing A/B testing, remember to randomize the order in which users test the designs to counteract any "ordering effects" (e.g., preferring the first design because the user immediately adopts that mental model).
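Randomizing the order per participant is simple to do; the hypothetical Python sketch below illustrates one way. For very small participant pools, strict counterbalancing (alternating which design is seen first) may be preferable to pure randomization:

# Hypothetical sketch: give each participant the designs in a fresh
# random order, so neither design is systematically tested first.
import random

designs = ["Design A", "Design B"]
for participant in ["P01", "P02", "P03", "P04"]:
    order = random.sample(designs, k=len(designs))  # shuffled copy per participant
    print(participant, "->", order)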

It is especially important that the facilitator be careful in their interactions with the test participant when using the "talk aloud" protocol. When interacting with users in a session, a facilitator should generally not interrupt the participant, except to remind them to talk out loud when they don't ("Please remember to talk out loud"). Asking questions during the task further interrupts their workflow (which is already affected by speaking out loud) and may lead them to "overthink" their actions. Instead, it is better to ask clarifying questions after they have completed the task, such as why they took a particular action or felt a particular way. In any event, it is important never to ask leading questions, such as "So what is good/bad about that...?" Ask neutral questions instead, using phrases such as "Tell me more about why you did that" or "Why do you feel that way?" Leading questions will produce biased results.

In general, in user testing it is best to pay more attention to what the user did (performance data) than to what they said they liked (preference data).
