Mobile user testing results (Draft 9, Engage 0.3)

These results are for the user testing done against Draft 9.

On this page

I. Summary of results

1. List of issues

Significant design issues are highlighted in green and prefixed with *. (Conditions for "significant": the issue has a considerable effect on the experience, and addressing it likely requires substantial additions or changes to the design.)

Infrequently: ~ <15% of the time
Sometimes: ~ 15-35% of the time
Frequently: ~ >35% of the time

A. General
  • * Frequently, there was difficulty finding objects from the virtual in the physical, and objects from the physical in the virtual.
  • * Frequently, users spent more time looking at/interacting with the device than the space. This may be due to a variety of reasons, including: a) difficulty in making the physical-virtual connection (e.g., finding objects), b) slow performance of the application, and c) bias from the fact that they were user testing the mobile device. A couple of users did spend more time looking at the space than the device, however.
  • Frequently, the application performance was slow. Fixed in 0.3b3.
  • * Sometimes, users weren't sure what to do with the application/what the application was supposed to do. That is, some users didn't know what they should do or what the purpose of the application was. P: "Does it guide me, or do I guide it?"
  • Sometimes, there was confusion about whether a particular object was in the exhibition or in storage. Users saw objects on the device but couldn't find them in the space; this happened sometimes for catalogue objects, and especially for artifacts under 'Related artifacts.' And if the object was assumed to be in the exhibition, it wasn't clear whether it was near the object currently being observed or elsewhere in the space.
  • Sometimes, there was confusion about whether an object in the physical space was available in the application. This is the opposite of the issue above.
  • Sometimes, there was confusion about whether something was a "value add" or a digital replica. For instance, text descriptions of artifacts.
  • * Sometimes, users wanted the device to guide them through the space. Some users wanted the device to tell them where to go first, and what to see next.
  • * Sometimes, users felt that the device was guiding them through the space and wanted a more freeform experience. Some users felt just the opposite of the above. Specifically, they thought that the catalogue screen was meant to be experienced in order (and of those who did, one commented that it felt like a disjointed experience as the device didn't keep the same pace as the space).
  • Sometimes, the keys on the onscreen keyboard were too small to type on. One user wanted a stylus.
  • Frequently, there was some difficulty trying to find symbols on the onscreen keyboard. Specifically, '@' and '.'.
  • Frequently, users commented that they didn't like having to scroll to the top from the bottom of the screen in order to navigate to another page. This was especially true of very long screens, such as the catalogue or artifact view with description extended, and when users wanted to go back a screen or go home.
  • Frequently, SUBMIT was not the first button users tapped to submit a text entry. Users would either tap DONE (which retracted the keyboard) then SUBMIT; or RET, then DONE, then SUBMIT.
  • Frequently, users didn't know if the application was still loading a new screen or if they hadn't tapped on an item successfully. Many users tapped several times, though some who were familiar with the iPod touch/iPhone did note the loading spinner in the status bar.
  • Infrequently, users expected that tapping on 'Home' would bring the user back to the initial language selection screen.
B. Comments/guestbook
  • Sometimes, the text in the comment text entry field was too small.
  • Sometimes, it wasn't clear whether comments were displayed privately or publicly. However, most, if not all, users suspected they were public.
  • Sometimes, users expected confirmation that their comment was successfully added.
  • Infrequently, it wasn't clear whether the guestbook was for a particular object or for the entire exhibition.
  • Frequently, it wasn't clear how to type in a comment. For comment entry, the keyboard did not come up automatically. Users had to tap on the empty box first.
  • Sometimes, there was confusion about what the difference was between Guestbook, Note, and Comment.
C. Exhibitions and its subscreens
  • * Frequently, the exhibition detail screen was used as the virtual 'home'. Users expected that all the mobile experience offerings for the exhibition they were in would be held in the exhibition detail screen (e.g., "Simply Montreal"). Many users went to this screen first, and stayed within this section for a large part of the testing session.
  • * Frequently, users made an unintended connection between catalogue themes and physical space sections. There was a loose link between catalogue themes and physical sections, but it was not intended that users use this as a way of navigating the space. Surprisingly, some users did find success with this.
  • * Sometimes, the exhibition introduction screen (e.g., "Simply Montreal") was confusing. A few users found this screen confusing, as they seemed to expect it to guide them through the space.
  • Infrequently, users didn't know what exhibition they were in. A few users didn't know which exhibition to tap on in the exhibitions screen as they didn't know where they were.
  • Sometimes, users didn't understand why only a few objects were shown under each theme in the catalogue.
  • Sometimes, the 'View all' button in the catalogue was difficult to notice. Some users found it later in the session, and noted that it wasn't easily visible (or was too small).
  • Infrequently, the 'Switch to grid' icon was mistaken for 'Object code entry'.
D. Object code label/entry
  • * Frequently, object code entry went unnoticed/was not the obvious first choice. Corollary to the point about the exhibition detail screen being used as the virtual home. Many users had the instinct to go to 'Exhibitions' when they first entered the space, and did not notice the 'Object code entry' option.
  • Infrequently, the connection between object code label and object code entry was not made. Some users saw both the object code label and the object code entry option, but did not make a connection between the two.
  • Sometimes, the object code label went unnoticed. Some users didn't even notice the object code label until it was brought to their attention by the moderator.
  • Infrequently, the object code label was misinterpreted. Some users thought it was a button, and tried pressing it. One expected it to read something/show a video, another expected that pressing it would add the object to My Collection.
  • Sometimes, users expected an 'Enter' or 'Submit' key for the object code entry. Some users were fine without it, others expected and wanted an 'Enter' button, and one or two thought that the 'Delete' button was the 'Submit' button.
  • * Sometimes, it wasn't clear which object the object code label referred to. Labels were often placed near clusters of objects, and it wasn't clear which one it was referring to.
  • * Sometimes, users would try entering a label number into object code entry. The label numbers were found on tombstone labels that had multiple objects on them, but they were unrelated to the object code number for the application.
  • * Sometimes, having object code entry accessible from anywhere was desired. This is in contrast to the current design, where one needs to navigate back to it or visit it from the home screen.
E. Media
  • Infrequently, the expectation upon tapping an artifact with a media badge was that it would play the media right away. When it didn't, the same user expected that tapping the badge itself would play the media right away.
  • Infrequently, there was difficulty adjusting volume on the device. Most users did not attempt to change the volume, but some of the ones that did had difficulty knowing how to do it. Also, during the attempt, many inadvertently changed the orientation of the device.
  • * Sometimes, headphones were wanted for listening to videos. Both because it was hard to hear from the device's speakers even at maximum volume, and because some users felt self-conscious about disturbing other visitors/listening to something with other visitors around.
  • Infrequently, users wanted to quit midway through a video and didn't know how to.
  • Sometimes, users wanted to know how to get to the objects shown in the videos. Some users wanted to know whether those objects were in the space, or were value-added extras.
F. Artifact view
  • Infrequently, there was mild uncertainty about what 'Collect' did. One user thought it enabled purchasing of a print afterward. Almost all (including the aforementioned user) understood it was a way of storing objects for future reference.
  • Infrequently, users noted that the descriptions/narratives were duplicated across platforms. Artifact descriptions were read verbatim in the videos, and extended labels had the same text as artifact descriptions.
  • Sometimes, users felt that the extended artifact description was too much to read.
  • Frequently, users tapped on panel expand/collapse several times because it didn't appear to do anything. The problem was that the expansion occurred below the fold, and the screen did not scroll to reveal it.
  • Sometimes, users didn't think to scroll down in artifact view, thus missing out on the content below the fold.
G. My Collection
  • Frequently, users weren't sure whether they had successfully sent their My Collection to themselves. Specifically, there was no feedback after submitting one's email for My Collection.

2. List of keeps/build-upons users explicitly liked

  • Artifact images. Many liked being able to see a digital version of the physical artifact, often because they could see it more closely, and sometimes from a side not visible in the physical space. Some noted that it'd be nice to zoom in or get a larger image.
  • Digital tombstone label. Even though it was a partial redundancy with the physical label, some users liked having it on the mobile device, especially when there was additional information.
  • Lists of artifacts.
  • Extended description. This was a bit contentious. Some users really liked the extended descriptions, which provided more than what was on the label and explained what obscure objects were. Others thought it was too much to read, and weren't interested in reading it, but might be interested in listening to it narrated.
  • Video clips. Especially when it wasn't a static image with audio track.
  • Related artifacts. Users liked that there are extras.
  • Comments. Some users found some of the comments humorous.
  • My Collection. Especially the fact that they could send it to their email address, thus avoiding the need to bring a camera around with them in the museum to remember interesting objects. However, one user was strongly opposed to My Collection (or any feature that involved a post-visit experience).

3. List of additional features users explicitly wanted

  • Image zooming. Users wanted to zoom into the details of an image, especially when they couldn't get close enough to an object physically.
  • Map of the museum/exhibit. Especially to locate objects that are on the device, or to tap on parts of the floor and get the objects that are located there.
  • Artifact search. Typing in part of the name/description of an artifact, and getting a list of relevant artifacts. This was especially desired when in front of an artifact that didn't have an object code (or when they didn't detect the object code label).
  • At the beginning of the session, a tutorial/guide on how to use the application/what to use it for.
  • List of digital "nuggets". A number of users wanted a full list (on the device) of all things digital that they could look at for the exhibition on the device, whether it be audio, video, images, extra information, stories, etc.
  • A way of narrowing down lists of artifacts. For instance, if user knows that the artifact they're looking for is a painting, some way of narrowing the list down to paintings.
  • More narrative behind an artifact. Instead of more technical information.
  • Something more visual, less textual. Some users noted being anxious to see something more visual/exciting, and less textual.
  • Note that says whether you're allowed to touch/interact with specific artifacts. Some artifacts/interactives are meant to be touched, many others are not.
  • Removal of one-media, two-tap redundancy. Most artifacts had only one piece of media; it would've been better to play it after one tap instead of requiring two taps (one to expand the panel, the other to actually play).
  • Location-aware browsing. P: "When you walk around, maybe it can show on the screen what the possibilities are, an image of what you can see from your position, instead of thinking, 'Where's the object code? Is there an object code?', and instead of saying, 'Well I saw number 8 and 10, but where were the others?'"
  • Showing nearby objects on the device.
  • A link to the McCord website.
  • A way of communicating directly to museum staff. E.g., to leave a comment, report a problem, express interest in donating an object, etc.
  • Jumping directly to the device's analogue of a physical section of the museum. Instead of having to scroll through/fish around the options.
  • Presenting of artifacts through a taxonomic structure.
  • Recommendations feature.
  • In-museum experience, not out-of-museum experience. One user was strongly opposed to the 'Collect' feature because of its out-of-museum use, and suggested replacing it with more in-museum experiences.
  • Interactive games.
  • More contextual information. E.g., what artists inspired other artists.
  • Have the textual content read out. As opposed to reading it on the screen.
  • Make it like a human tour guide.
  • More "bonus" content. That is, content that's not available without the device.
  • Before-and-after images of artifacts. E.g., before-and-after restoration.
  • Different views of artifacts. Especially hidden views (e.g., inside pockets, underneath, etc.).
  • Captioned video. Especially since volume wasn't loud enough on the device.

4. List of miscellaneous notes/observations

  • When a user makes a link between digital and physical, that bond corroborates that the user is on the right track. From the raw notes: P made a link between a section of the catalogue and the physical space because she saw an object in the space that she had just seen on the device. P: "Now that I know where I am, feeling a little less lost" [only needed a single object to feel that comfort].
  • While watching a video, users tended to spend roughly equal amounts of time looking at the video and at the actual object.

II. Detailed user testing notes and results

You can download a PDF version of the results, which is better formatted.

Engage 0.3b2 (not the high performance version)

Mobile user testing results, Participant 1 (Engage 0.3)
Mobile user testing results, Participant 2 (Engage 0.3)
Mobile user testing results, Participant 3 (Engage 0.3)
Mobile user testing results, Participant 4 (Engage 0.3)
Mobile user testing results, Participant 5 (Engage 0.3)
Mobile user testing results, Participant 6 (Engage 0.3)
Mobile user testing results, Participant 7 (Engage 0.3)
Mobile user testing results, Participant 8 (Engage 0.3)