PhET, at the University of Colorado, provides free, open, interactive physics simulations (http://phet.colorado.edu/). While these simulations (or "sims") are effective and engaging, some present barriers that certain learners must overcome in order to access them and learn.
Many simulations assume fine motor control, sight, or particular cognitive abilities. How, then, do we redesign or adapt these physics simulations so they include a broad range of learners and their preferences? What does it mean for a PhET sim to be inclusive?
To improve the inclusiveness of PhET simulations, keyboard interaction and non-visual interaction were identified as areas to pursue. Initially these two ways of interacting with a simulation were thought to be wholly different modes, each requiring its own design. But as we discovered through the Forces & Motion: Basics use case, the two interactions are closely related and call for a single unified design.
The term "keyboard interaction" is used broadly here to describe any input method that uses buttons as the primary mode of interaction, whether a traditional QWERTY keyboard, a single-switch input, a sip-and-puff controller, or an on-screen keyboard. These devices require a different style of navigation and interaction from "point-and-click" input such as a mouse or touch gestures.
In general, when designing for keyboard accessibility:
All critical controls should be keyboard navigable and focusable.
Navigation structure and tab order should be logical and predictable.
Interactions should be natural and efficient.
Additionally for PhET sims:
Disabled controls should remain keyboard focusable, so that non-visual learners using screen readers can still discover them.
Because a simulation can contain a large number of interactive components, interactive items should be logically grouped so the user can move quickly around the interface, jumping from group to group, and then choose to dive into a group. For example, a media player may have the pause, play, and stop buttons in one group, and the captions, volume, and other preferences in another.
"Tab order" for simulations does not need to follow the traditional "top-down" tab order of web content.
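The point above about disabled controls departs from default HTML behavior: a native `disabled` attribute removes a control from the tab order entirely, so a screen-reader user tabbing through the sim would never encounter it. A common workaround is to keep the element focusable and mark it with `aria-disabled` instead. A minimal sketch of that idea (the function name and attribute shape are illustrative, not PhET code):

```javascript
// Return the attributes a sim control should carry so that it stays
// in the tab order even when it cannot currently be activated.
// Using aria-disabled (rather than the native disabled attribute)
// keeps the element focusable, so screen-reader users can still
// discover it and hear that it is unavailable.
function buttonAttributes(enabled) {
  return {
    tabindex: '0',                     // always reachable with the Tab key
    'aria-disabled': String(!enabled)  // announced as "dimmed"/"unavailable"
  };
}
```

The activation handler would then ignore Enter/Space while `aria-disabled` is `"true"`, rather than relying on the browser to block the event.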
A non-visual interaction had to be created from scratch for the Forces and Motion: Basics simulation. An initial attempt was made at a 1-to-1 translation of the mouse interaction for screen-reader use, but it became clear that more contextualization and scene setting were needed.
To accomplish this, we:
Added a scene description at the start of the sim, updated it as the sim progresses, and considered adding a hotkey that reads the description back on demand.
Added and updated descriptions of moving parts and interactive elements.
Grouped focusable items, giving the user semantic landmarks with associated text descriptions and labels.
We did not segregate visual and non-visual use into separate interactions: learners with low vision may still rely on the visuals, and sighted users may want the additional detail that text descriptions offer.
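Keeping descriptions updated as the sim runs means generating text from the model state and pushing it to assistive technology, typically through an ARIA live region. A hedged sketch of the text-generation half, with invented wording and force values rather than the sim's actual phrasing:

```javascript
// Build a spoken description of the net force on the cart from the
// applied left and right forces (in newtons). The wording here is
// illustrative; a real sim would localize and refine these strings.
function describeNetForce(leftForce, rightForce) {
  const net = rightForce - leftForce;
  if (net === 0) {
    return 'The applied forces are balanced. The cart does not move.';
  }
  const direction = net > 0 ? 'right' : 'left';
  return `Net force of ${Math.abs(net)} newtons to the ${direction}.`;
}

// The resulting string would be written into an element marked
// aria-live="polite", so screen readers announce each update
// without stealing focus from whatever the user is doing.
```

Regenerating the description from model state, rather than patching the previous string, keeps the spoken and visual representations from drifting apart.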
Wireframe for Forces and Motion: Basics
Tab key moves focus from group to group.
Pressing the Enter key on a group puts focus on the first interactive item in the group.
Pressing the Left or Right arrow key moves focus between interactive items within a group. Pressing Enter on an interactive item activates it.
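The navigation scheme above can be sketched as a small state machine over a (group, item) pair, where a null item means focus sits on the group itself. The names and structure here are illustrative assumptions, not the sim's implementation:

```javascript
// Compute the next focus position from a key press.
// state = { group: number, item: number|null }  (item null = group itself)
// groupCount: number of groups; itemCountOf(g): items in group g.
function nextFocus(state, key, groupCount, itemCountOf) {
  const { group, item } = state;
  if (key === 'Tab') {
    // Tab moves focus from group to group, wrapping at the end.
    return { group: (group + 1) % groupCount, item: null };
  }
  if (key === 'Enter' && item === null) {
    // Enter on a group dives into its first interactive item.
    return { group, item: 0 };
  }
  if (item !== null && (key === 'ArrowLeft' || key === 'ArrowRight')) {
    // Arrow keys cycle between interactive items within the group.
    const n = itemCountOf(group);
    const delta = key === 'ArrowRight' ? 1 : -1;
    return { group, item: (item + delta + n) % n };
  }
  // Enter on an item activates it; focus position is unchanged.
  return state;
}
```

A keydown listener would call this on each press and move DOM focus to the element at the returned position, with the group containers exposed as semantic landmarks so screen readers announce each group's label on entry.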
Slideshow of keyboard and non-visual interaction mockup: