OER Discovery Tool Sketch
Organizational Ideas for Discovery Tool in OER Setting
Initial example assumes that only a keyboard, mouse, and speakers are detected with the device (a rough detection sketch follows).
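As a concrete starting point, here is a minimal sketch of how a browser-based tool might guess at those three devices. The `DetectedDevices` shape and the heuristics are my assumptions, not an agreed design; in particular, there is no reliable way to detect a keyboard directly, so the sketch falls back to a touch-screen heuristic.

```typescript
// A minimal sketch of browser-side device detection, assuming the tool
// runs as a web page. DetectedDevices and the heuristics below are
// assumptions; notably, there is no direct keyboard-detection API, so
// the sketch falls back to a touch-screen heuristic.

interface DetectedDevices {
  keyboard: boolean; // best guess only
  mouse: boolean;    // a fine pointer (mouse/trackpad) is present
  speakers: boolean; // at least one audio output device is listed
}

async function detectDevices(): Promise<DetectedDevices> {
  // A "fine" pointer strongly suggests a mouse or trackpad.
  const mouse = window.matchMedia("(pointer: fine)").matches;

  // As a rough proxy, assume a keyboard unless the device appears to
  // be touch-only.
  const keyboard = !(navigator.maxTouchPoints > 0 && !mouse);

  // enumerateDevices() lists device kinds even before the user grants
  // media permissions (labels stay empty); support varies by browser.
  let speakers = false;
  try {
    const devices = await navigator.mediaDevices.enumerateDevices();
    speakers = devices.some((d) => d.kind === "audiooutput");
  } catch {
    // API unavailable (e.g. insecure context): leave speakers false.
  }

  return { keyboard, mouse, speakers };
}
```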
How can the tool communicate with the student? (See the data-model sketch after this list.)
- Auditory (Verbal)
- Baseline: Can the student hear/interpret audio? Yes/No
- Adjustments: Basic adjustment (3 or 5 setting options, e.g. volume)
- Textual (On-screen Text)
- Baseline: Can the student see/interpret text? Yes/No
- Adjustments:
- Language: Short list of relevant options
- Size: Basic adjustment (3 options)
- Contrast: Basic adjustment (small number of options)
- Visual (On-screen Images)
- Baseline: Can the student see/interpret images? Yes/No
- Adjustments:
- Size: Basic adjustment (3 options)
- Contrast: Basic adjustment (small number of options)
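To make the checks and adjustments above concrete, here is one possible data model for the output side. All type and field names are placeholders, and the option counts simply mirror the "basic adjustment" idea of a small, fixed set of settings.

```typescript
// A hypothetical data model for the output-modality checks above; the
// names and option sets are placeholders, not a settled design.

type ThreeWay = "small" | "medium" | "large";

interface AuditoryPrefs {
  canHearAudio: boolean;                      // baseline: yes/no
  volume?: "low" | "medium" | "high";         // basic adjustment (3 options)
}

interface TextualPrefs {
  canSeeText: boolean;                        // baseline: yes/no
  language?: string;                          // short list, e.g. "en", "es"
  size?: ThreeWay;                            // basic adjustment (3 options)
  contrast?: "default" | "high" | "inverted"; // small number of options
}

interface VisualPrefs {
  canSeeImages: boolean;                      // baseline: yes/no
  size?: ThreeWay;                            // basic adjustment (3 options)
  contrast?: "default" | "high";              // small number of options
}

interface OutputPrefs {
  auditory: AuditoryPrefs;
  textual: TextualPrefs;
  visual: VisualPrefs;
}

// The baseline answers determine which channels the tool may use to
// communicate with the student at all.
function availableChannels(p: OutputPrefs): string[] {
  const channels: string[] = [];
  if (p.auditory.canHearAudio) channels.push("audio");
  if (p.textual.canSeeText) channels.push("text");
  if (p.visual.canSeeImages) channels.push("images");
  return channels;
}
```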
How can the student communicate with the tool? (See the input-check sketch after this list.)
- Keyboard
- Baseline: Can the student interact with/navigate using the keyboard? Yes/No
- Adjustments: ?
- Mouse
- Baseline: Can the student use/control the mouse? Yes/No
- Adjustments: ?
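One way these baseline questions might be implemented is as timed prompts: ask the student to press a key (or click an on-screen target) and treat a response within the time limit as "yes". This is only a sketch; `checkInput` and the timeout values are assumptions.

```typescript
// A sketch of a timed baseline input check: prompt the student to
// respond, and treat a response within the time limit as "yes".
// checkInput and the timeout values are assumptions, not a design.

function checkInput(
  eventName: "keydown" | "click",
  timeoutMs: number
): Promise<boolean> {
  return new Promise((resolve) => {
    const onResponse = () => {
      clearTimeout(timer);
      document.removeEventListener(eventName, onResponse);
      resolve(true);
    };
    const timer = setTimeout(() => {
      document.removeEventListener(eventName, onResponse);
      resolve(false); // no response: record "no" (or perhaps "unknown")
    }, timeoutMs);
    document.addEventListener(eventName, onResponse);
  });
}

// Usage, e.g. while an audio/text prompt asks the student to respond:
//   const canUseKeyboard = await checkInput("keydown", 15000);
//   const canUseMouse = await checkInput("click", 15000);
```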
Open Questions (at least for me!)
- How can these options be overlapped/consolidated? In the case of minimal device features, I imagine these checks and adjustments could be cleverly designed so that many determinations are made simultaneously (a rough sketch of this idea appears after this list). I believe Dana has already uploaded some nice examples of this.
- What is the range of other assistive devices that can be readily recognized by the First Discovery Tool? Which would override other options, and which would go through checking and adjustments in addition to those listed above?
- For the "adjustments" categories above, there are many that I can imagine, but I am not experienced enough to know what might be considered "basic" options - appropriate for a First Discovery Tool?
- Is this enough? That is, is it sufficient to focus on basic determination and adjustment of a few baseline input/output modes? This is how I interpreted prior discussions of the goals of the First Discovery Tool, but I may be misunderstanding.
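On the first open question, here is a rough illustration of how one prompt could make several determinations at once: the same instruction is delivered as speech (via the standard speechSynthesis browser API) and as on-screen text, so a single keypress confirms both keyboard input and at least one working output channel in one step. A follow-up question ("Did you hear this, read it, or both?") would still be needed to tell the channels apart. All names here are placeholders.

```typescript
// A rough illustration of the consolidation idea: one prompt, two
// output channels, and a single keypress that answers several baseline
// questions at once. Placeholder names, not an agreed design.

function waitForKeypress(timeoutMs: number): Promise<boolean> {
  return new Promise((resolve) => {
    const onKey = () => {
      clearTimeout(timer);
      document.removeEventListener("keydown", onKey);
      resolve(true);
    };
    const timer = setTimeout(() => {
      document.removeEventListener("keydown", onKey);
      resolve(false);
    }, timeoutMs);
    document.addEventListener("keydown", onKey);
  });
}

async function combinedFirstCheck(): Promise<{
  keyboardWorks: boolean;
  someOutputWorks: boolean;
}> {
  const prompt = "Press any key to begin.";

  // Deliver the same prompt on two channels at once.
  window.speechSynthesis.speak(new SpeechSynthesisUtterance(prompt)); // audio
  document.body.insertAdjacentHTML(
    "beforeend",
    `<p style="font-size: 2rem">${prompt}</p>` // on-screen text
  );

  // One response answers several questions simultaneously: the student
  // can use the keyboard, and at least one output channel is working.
  const responded = await waitForKeypress(20000);
  return { keyboardWorks: responded, someOutputWorks: responded };
}
```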