PGA Co-Design Meeting Notes: 2014-12-19

Webex

https://iskme.webex.com/iskme/j.php?MTID=mbbe8e80baa0f470134159faed93f25f8

Attendees

  • Anastasia Cheetham
  • Kate Katz
  • Kathy McCoy
  • Jess Mitchell
  • Emily Moore
  • Madeleine Rothberg
  • Sepideh Shahi
  • Gregg Vanderheiden
  • Amy VanDeVelde
  • John Willis

Agenda

  1. Quick review of platform discussions (Anastasia)
  2. Scope
  3. Latest design mock-ups (Dana)

Proposed Decisions

Develop as a web app for Chrome using the Web Speech API; test on whatever other browsers support the API.

Next Steps

  • Gregg will create a Google Doc for capturing decisions/resolutions, their rationales, and any counterpoints
  • Kate will schedule an extra call to discuss scope issues

Notes

NIDRR feedback: Need at least one end-to-end prototype, in at least one application setting.

Contract calls for at least one end-to-end functional, working example of the tool in at least one of the application settings, in a prototype format. Will have a separate conversation to talk about scope and reconsider the requirements document based on NIDRR's feedback.

Gregg sees two dimensions of "scope": 1) which application area, and 2) what does and doesn't need to be in the wireframes.

Platform discussion review: JavaScript web application, using the Web Speech API; test on Chrome and Safari. Do we need to double-check that it works on Safari? It claims to... Let's say "Test on Chrome and any other browsers that support it." (This has been recorded in the "decisions" document.)
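
A minimal sketch of the feature detection this decision implies, assuming plain browser JavaScript; the speechSupport function name is ours, not from the project:

    // Check which parts of the Web Speech API this browser exposes.
    function speechSupport() {
        return {
            synthesis: 'speechSynthesis' in window,
            recognition: 'SpeechRecognition' in window ||
                         'webkitSpeechRecognition' in window
        };
    }

    if (speechSupport().synthesis) {
        // e.g. read the current prompt aloud
        window.speechSynthesis.speak(new SpeechSynthesisUtterance('Welcome'));
    } else {
        // The tool should remain usable without speech output.
    }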

Designs:

Gregg reviews his concerns, as articulated in his email. One concern: control over TTS; people with reading disabilities prefer to have control over what it reads and when.

Jess: "turn talking off" is NOT early in the process, essentially because it's not as important as the things that come before it. It's on at the start, so anyone who needs it has it; people who don't need it aren't "harmed" by it being on, but they may have other issues that need sorting out more urgently.

Regarding volume: devices have their own volume controls, so we may not need a volume setting in the tool; this might be one of the things we leave out of scope for this round.

Gregg: But how will the user know how to adjust the volume? A newbie may have no idea. Maybe put volume adjustment right up front, almost on the first screen.

Emily: opinions diverge based on context; we should capture these divergences. E.g. in a classroom, addressing volume early seems important. Note the differences so that we can pick a route and go now, but remember, in the months to come, where importance differed across contexts.

Jess: Some of the decisions about "what comes first" will be a vehicle for us to have further discussions about the application-setting-specific implementations, i.e. some of the "what comes first" decisions will be implementation-level decisions.

The "do you need help" screen would appear if there was a delay in response to the questions, or if there were a number of inappropriate keystrokes (e.g. letters instead of arrows)

Screen layout: there are more dots than icons because each "topic" covered by an icon might span more than one screen (e.g. for volume there are "speech on/off" and "volume" screens; for text size, "minimum" and "comfortable").

Text size: the designs show two possible approaches: two screens (minimum and comfortable) or one screen. We can test this.

Note: The Undo control only appears when necessary and only applies to the current screen.

Final screen = Language. Note the difference from the first screen: the first screen is a "welcome" screen, while the last has icons and a progress indicator. The idea is that if you return to the language controls later, it's no longer a welcome screen.

Suggestion: for text size, using text vs. images as the example might be an application-setting-specific difference, e.g. text works well for OER, images for other settings. Needs SME input and user testing.

Jess's three pivots: 1) we need to start development as soon as possible; 2) we need to make hard, clear decisions about scope; 3) making those decisions will help flesh out application-setting-specific details and what we are going to do about them. These three things form a triangle that is struggling, pushing, and twisting. We need to address all three simultaneously.

Kate: we also need to specify the criteria for determining which application setting we focus on.

Emily's comments, which capture what we should do as a community: we're coming from different places, trying to work on one thing, and different side ideas come up. Maybe instead of discussing "what is the linear progression for this one tool?", think about "what are these different preference chunks, and what's important for different application settings? What would it look like to ask for these particular preferences individually, regardless of the order of presentation?" I.e., design the slow keys interface regardless of who would use it.

Jess: acknowledging that this is really multiple tools – different tools in different settings – how do we move forward to make the decisions that we do need to make, and how do we articulate how someone might make decisions about what needs to be in a tool in a particular context?

Instead of trying to just make a decision, we should be doing user testing.