PGA Co-Design Meeting Notes: 2014-10-15

Process questions

Things that we might need in the future:

  • Details about how to create a comment on the wiki

  • Should we use WebEx in the future, or something that won’t require any installation?

  • NOTE: Having comments in-line is actually easier for screen readers than using the comment widget

  • Versioning history is available on the wiki page, so there is no need to create new versions of working documents (yay!)

 

Meeting agenda

  1. How to talk about designs

  2. Talk about the sketches

  3. Revisit the co-design process

 

How to talk about designs

  • This process is iterative - we will have small successes and build on them

  • Try to avoid referring to the designer; this is all collaborative work

  • Focus on function and intent rather than specifics (things like color)

  • Ask questions

  • Explore the questions together

  • Be generous and generative :)

  • There are no bad ideas

  • There may be only a few submissions during any phase

 

Note on Project Timeline - Next deliverables

    • Create a list of requirements (11/15/2014) - Already underway

    • Create common features (2/15/2015) - Where we are now

Sketches

  1. Sketch one: Discovery tool for Open Educational Resources

    1. The system starts at a minimal level of font size or volume; the user can then make further adjustments

    2. Was a particular piece of hardware in mind for this sketch? A keyboard, maybe

      1. There are places where keys are pressed; this would be a very different interaction on a touch screen

    3. This sketch does a nice job of highlighting potential barriers:

      1. the device the user is on

      2. input methods

      3. perception questions

      4. getting in the door - doing something that is good enough vs. doing something that requires fine-tuning

      5. technology may be intimidating

        • Offer alternatives - allow the person to find assistance another way

        • This is a moment for the user to ask for help

    4. This sketch is trying to define “Minimal detectable output”  

      1. In an ideal situation, what is the smallest text they can see and the quietest sound they can hear - allow them to start from there, rather than from a standard default like 12-point font.

        • Do we want to use a new term for this? - For now, as we are just sketching, a simple term like “Setting minimal font and volume” will suffice (see the rough code sketch after the Sketches list below)

    5. Potential issues

      1. The user might stop at a size that is OK, but not best for them

        • Should we ask “can you hear the audio?” or “is this audio good?”

        • We want the question to be framed well enough that we catch them not just at their minimum, but at a level suited for optimal comprehension (understanding)

        • Identifying worst-case situations could be used to identify the high end of the scale.

          1. For example: a user has a certain setting they like their screen reader set at, but if there is construction outside, this minimum setting changes

          2. But that would be a performance issue - the highest volume would be the level above which it becomes intolerable

      2. Could we identify a “typical environment”?

        • Have them try the interaction in a typical environment

        • Probably easier to determine a typical environment for the higher-ed use case than for K-12

        • Testing options vs. asking directly

          1. Design choice that this brings up - some preferences seem well suited to being presented as tests, while other preferences are best presented as a direct question

          2. Can we ask about the current environment and make note of their preferences within it?

          3. One test example: drag the column of the box until the font is more readable

          4. *Understanding not just perceiving* - change it until you like it.

          5. We are likely to get a better result if we put users in a real context where they are actually reading a story (for example)

          6. *This is first discovery, so we want to get in the ballpark so that we can communicate with the user*

          7. What do we mean by *critical task* - something the user is already doing in context, or do we provide a critical task?

          8. We need to be sure that the assessment tool is gathering information applicable to the critical task that the person is going to perform with the device

          9. Frame it not so much as an assessment, but as “set it where it’s comfortable for you” - this is creating a baseline for you.

          10. The reading-a-story task seems like a good approach for now, because it helps us get the user, in a realistic way, into the “perception ballpark”

          11. Volume is a painful thing rather than an inconvenience (volume vs. font size), so allow the user to specify that volume will be the first setting adjusted - each time there is a new session, volume should be the first option to adjust

  2. Sketch two

    1. Similarities to sketch one:

      1. Trying to start with an interactive approach, and a process of elimination

      2. Allowing different modes of input to start, and eliminating those as the user proceeds

      3. From there, the user is prompted to refine those preferences (a rough sketch of this elimination flow follows below)
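
Rough code sketch for sketch one (illustrative only): a console stand-in for the “minimal detectable output” flow discussed above. The starting values, step sizes, and prompt wording are assumptions made up for this sketch, not agreed design. Each setting starts at its minimum and steps up, first until the user can perceive the output at all, then until it supports understanding, not just perception; the user can ask for volume to come first in every new session.

    # Rough sketch only: a console stand-in for the "minimal detectable
    # output" discovery flow. Starting values, step sizes, and prompt
    # wording are illustrative assumptions, not agreed design.

    def discover_setting(name, minimum, maximum, step,
                         perceive_prompt, comfort_prompt):
        """Start at the minimum and step up: first until the user can
        perceive the output at all, then until it is comfortable for
        understanding (not just perception)."""
        value = minimum
        # Phase 1: find the smallest perceivable level.
        while value < maximum and input(
                f"{name} = {value}. {perceive_prompt} [y/n] ") != "y":
            value += step
        # Phase 2: keep going until it supports understanding.
        while value < maximum and input(
                f"{name} = {value}. {comfort_prompt} [y/n] ") != "y":
            value += step
        return value

    def run_discovery(volume_first=False):
        """Volume can be painful rather than merely inconvenient, so the
        user may ask for it to be the first setting adjusted in every
        new session."""
        steps = [
            ("font size (pt)", 6, 48, 2,
             "Can you read this text at all?",
             "Is this comfortable for reading a short story?"),
            ("volume (%)", 5, 100, 5,
             "Can you hear the audio at all?",
             "Is this comfortable for listening?"),
        ]
        if volume_first:
            steps.reverse()
        return {name: discover_setting(name, lo, hi, step, p1, p2)
                for name, lo, hi, step, p1, p2 in steps}

    if __name__ == "__main__":
        print(run_discovery(volume_first=True))

The two phases map onto the discussion above: phase 1 gives the minimum, phase 2 aims for the “understanding, not just perceiving” level, and a worst-case environment check could later cap the high end of the scale.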

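Rough code sketch for sketch two (illustrative only): the process of elimination over input modes. Offer every mode up front, let the user drop the ones they cannot or do not want to use, then prompt them to refine preferences for whatever remains. The mode list and prompts are assumptions for discussion, not agreed design.

    # Rough sketch only: sketch two's process of elimination over input
    # modes. The mode list and prompts are assumptions for discussion.

    INPUT_MODES = ["keyboard", "touch", "speech", "switch"]

    def eliminate_modes(modes):
        """Offer every input mode to start, and eliminate the ones the
        user cannot or does not want to use."""
        return [mode for mode in modes
                if input(f"Can you use {mode} input? [y/n] ") == "y"]

    def refine_preferences(modes):
        """From there, prompt the user to refine preferences for each
        remaining mode."""
        return {mode: input(f"Any adjustments for {mode}? ")
                for mode in modes}

    if __name__ == "__main__":
        remaining = eliminate_modes(INPUT_MODES)
        print(refine_preferences(remaining))
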
BIG QUESTIONS

    1. Are the assessments that we are presenting accurately preparing users for critical tasks?

    2. Which preferences lend themselves to tests and which to direct questions?

    3. How do we capture information about environmental factors?

    4. How can we find a setting that is optimal for the user, not just minimally detectable?

Next Steps

    1. Pull out some of the questions that Dana highlighted.

    2. Sketches will continue - we are working out functionality

    3. Continue contributing comments on the wiki and via email