User Test Example - the Fluid Lightbox

Planning & Executing a User Test of the Fluid Lightbox

To understand how a user test would be planned and executed, it may be helpful to walk through a fictional example. Let's take the Fluid Lightbox component and pretend, for discussion's sake, that it has not yet been created and that no designs exist. The high-level steps for planning and executing a user test of this component may include:

  1. Determine which persona(s) or roles represent the users of the new component.
  2. Devise a method for locating people who match the personas or roles to participate in user testing of the component.
  3. Create wireframes of the proposed design(s) for the new component.
  4. Create a scenario to give the user test believable context.
  5. Create tasks that reflect what most users will do most often. Alternatively, if user feedback has already identified problem areas in existing components, create tasks that involve the parts of the design(s) that encompass those "pain points."
  6. Create the user test protocol for the test administrator (the person who runs the test) to follow, along with any questionnaires and questions to ask the user.
  7. Practice running the test (dry run/pilot test).
  8. Find a location for the user test.
  9. Recruit users to participate in the user testing session.
  10. Run the user test sessions (with 3-9 users).
  11. Compile the results into a report and discuss them with the larger team.

Detailed Process Description

Now let's take a look at this scenario in a little more detail. In our example, several designers discuss and realize the need for the new component. Next, someone draws designs on a whiteboard and discussion follows. At the end of the discussion, the team is not sure whether the design will work. There's no code yet, so how can they decide? In this case, the team creates wireframes for the design (or even just a paper drawing) to be used in a user testing session.

After creating wireframes, the team creates a scenario for the user test to give the user some context. In this case, perhaps a student receives a collection of artwork from a Post-modern Art History class and has to rearrange the images in a meaningful way for their own use.

Then the team would create one or more tasks for the user to complete, either to test the design for overall usability or, if there are particular areas of concern, to identify or explore "pain points." In the case of the Lightbox component, one task might be to have the user move a specific image from one location in the collection to another, to see whether they understand how to reorder the collection.
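
To make this step more concrete, here is one way the team might write its tasks down in a structured form. This is a minimal sketch only; the field names, task wording, and success criteria are all hypothetical and not part of any Fluid testing protocol.

```python
# A minimal sketch of how user-test tasks might be recorded.
# All field names and example tasks here are hypothetical.

from dataclasses import dataclass
from typing import Optional

@dataclass
class UserTestTask:
    instruction: str        # what the administrator asks the user to do
    success_criterion: str  # observable behaviour that counts as success
    pain_point: Optional[str] = None  # known problem area the task probes

tasks = [
    UserTestTask(
        instruction="Move the third image in the collection so that it appears first.",
        success_criterion="User reorders the image without help from the facilitator.",
        pain_point="reordering",
    ),
    UserTestTask(
        instruction="Remove one image of your choice from the collection.",
        success_criterion="User finds and uses the delete control unaided.",
    ),
]
```

Writing each task down with an observable success criterion makes it easier for the note-taker to record outcomes consistently across sessions.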

Once the scenario and tasks have been created, the next step is to create the administrator's protocol (the script of questions and instructions the person running the test will read to the user), as well as any questionnaires the team may want the user to complete at the end of the session.
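
One way to ensure every facilitator reads exactly the same wording is to keep the protocol itself as data rather than as loose notes. Again, this is only an illustrative sketch; the wording and structure are invented for this example.

```python
# A hypothetical administrator's protocol kept as data, so that every
# facilitator reads the same instructions to every participant.

protocol = {
    "welcome": (
        "Thank you for participating. We are testing the design, not you; "
        "there are no wrong answers, and you can stop at any time."
    ),
    "think_aloud": (
        "Please say out loud what you are looking at and what you are "
        "thinking as you work through each task."
    ),
    "tasks": [
        "Move the third image in the collection so that it appears first.",
        "Remove one image of your choice from the collection.",
    ],
    "post_session_questions": [
        "What was the most confusing part of what you just did?",
        "If you could change one thing about this design, what would it be?",
    ],
}

# The facilitator reads the scripted sections verbatim at the start.
for step in ("welcome", "think_aloud"):
    print(protocol[step])
```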

The next step would be to complete a pilot test with a colleague (someone not working on the component) to make sure the test protocol runs smoothly. If any problems are found with the tasks or scenario, they can be fixed before real users are involved. In this example, the design is shown to the user as a "paper prototype": a test administrator moves the paper pieces around as if the user had clicked on a link or button.

Once a location and participants have been found, each user works through the scenario and tasks individually. A rating scale and several qualitative questions may be administered after the session. Most user tests involve about 3-9 people. It can be a good idea to have an odd number of participants so that results which amount to a vote between alternatives cannot end in a tie.
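
The example above does not name a particular rating scale, but a common choice for post-session ratings is the ten-item System Usability Scale (SUS). Assuming SUS were used here, the scoring arithmetic would look like this:

```python
# Scoring a System Usability Scale (SUS) questionnaire.
# SUS is one common choice of rating scale; the example in this section
# does not specify an instrument, so this is purely illustrative.

def sus_score(responses):
    """responses: ten answers on a 1-5 scale, in question order.

    Odd-numbered questions are positively worded (higher is better);
    even-numbered questions are negatively worded (lower is better).
    The adjusted sum is scaled to a 0-100 range.
    """
    if len(responses) != 10:
        raise ValueError("SUS has exactly ten questions")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Example: one participant's answers
print(sus_score([4, 2, 5, 1, 4, 2, 4, 1, 5, 2]))  # 85.0
```

Scores around 68 are typically considered average for SUS, which gives the team a rough benchmark when comparing design iterations.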

Usually one person runs the user test and another acts as a dedicated note-taker. In the case of paper prototype testing, you may want a third facilitator who acts as the "computer," moving the pieces of the paper prototype around in response to the participant's actions. Having extra facilitators is particularly useful when you cannot videotape the sessions: the person running the user test has to interact with the user and so may miss some observations. (Many videotaped user sessions are never watched, as transcribing a video session is very labour-intensive.) Even if multiple people facilitate the user test (e.g. one person who talks to the user about the tasks, one who takes notes, and one who acts as the computer), they should all work from the same protocol so that they give the same instructions to every user.