UI Options Text to Speech Tasking
The following notes were gathered at a meeting discussing the UI Options Text to Speech designs and the technical issues involved in implementing those designs.
--------
Node server changes:
- download: bind link to button (see the sketch below)
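A minimal client-side sketch of what "bind link to button" might mean here: a button whose activation triggers the existing download link. The element ids are assumptions, and the actual wiring may live in the markup served by the node server.

```typescript
// Assumed markup: a button (#tts-download-button) and an anchor
// (#tts-download-link) whose href points at the file to download.
const button = document.querySelector<HTMLButtonElement>("#tts-download-button");
const link = document.querySelector<HTMLAnchorElement>("#tts-download-link");

if (button && link) {
  button.addEventListener("click", () => {
    // Activating the anchor starts the same download the link itself would.
    link.click();
  });
}
```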
Floating text to speech widget:
- floating, draggable, stays put when the page scrolls
- when keyboard-focused, arrow keys move it (see the sketch after this list)
- need to determine increments
- movable nature is low priority
==> punt movability
- tooltip: timed, fades, is read out
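A minimal sketch of the arrow-key movement, assuming the widget is a fixed-position element (so it stays put when the page scrolls) and that the step is 10px; the selector and the increment are placeholders, since the actual increments still need to be determined.

```typescript
// Assumed markup: a container with class "tts-widget". The 10px step is a
// placeholder; the real increment is still to be determined.
const STEP_PX = 10;
const widget = document.querySelector<HTMLElement>(".tts-widget");

if (widget) {
  widget.style.position = "fixed"; // stays put when the page scrolls
  widget.tabIndex = 0;             // make the widget keyboard-focusable

  widget.addEventListener("keydown", (event: KeyboardEvent) => {
    // Start from the widget's current viewport position.
    const rect = widget.getBoundingClientRect();
    let left = rect.left;
    let top = rect.top;

    switch (event.key) {
      case "ArrowLeft":  left -= STEP_PX; break;
      case "ArrowRight": left += STEP_PX; break;
      case "ArrowUp":    top  -= STEP_PX; break;
      case "ArrowDown":  top  += STEP_PX; break;
      default: return;               // ignore other keys
    }

    event.preventDefault();          // arrows shouldn't also scroll the page
    widget.style.left = `${left}px`;
    widget.style.top = `${top}px`;
  });
}
```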
Settings:
- keyboard shortcuts
- inline edit?
- monitor keystrokes (see the sketch below)
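A rough sketch of keystroke monitoring for capturing a shortcut during inline editing; the field's class name and the "Ctrl+Shift+S"-style representation are assumptions.

```typescript
// Assumed markup: a text input with class "tts-shortcut-field" used for
// inline editing of a keyboard shortcut. Listening on keydown captures the
// full combination (modifiers + key) instead of the typed character.
const field = document.querySelector<HTMLInputElement>(".tts-shortcut-field");

if (field) {
  field.addEventListener("keydown", (event: KeyboardEvent) => {
    event.preventDefault(); // keep the raw character out of the field

    const parts: string[] = [];
    if (event.ctrlKey)  parts.push("Ctrl");
    if (event.altKey)   parts.push("Alt");
    if (event.shiftKey) parts.push("Shift");
    if (event.metaKey)  parts.push("Meta");

    // Ignore presses that consist only of a modifier key.
    if (!["Control", "Alt", "Shift", "Meta"].includes(event.key)) {
      parts.push(event.key.length === 1 ? event.key.toUpperCase() : event.key);
      field.value = parts.join("+"); // e.g. "Ctrl+Shift+S"
    }
  });
}
```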
Selection and Play popup:
- popup appears on release of mouse; does not move as mouse drags
- determine what's selected (see the selection sketch after this list)
- look for mouse down, then query the DOM
- how to determine the node? find the current active element; that will be the node
- actual selection could be mid-node
- keyboard?
- play popup must be in correct DOM order
- play popup disappears on activate
- if selection begins mid-word, entire word spoken
==> articulate issues on-list
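A sketch of querying the selection when the mouse is released and expanding the start of a mid-word selection back to a word boundary, so the whole word gets spoken. Listening on mouseup (matching "popup appears on release of mouse") and the whitespace-based notion of a word boundary are assumptions.

```typescript
// Rough sketch: on mouseup, read the current selection and, if it starts
// mid-word inside a text node, pull the start of the range back to a word
// boundary so the whole word gets spoken.
document.addEventListener("mouseup", () => {
  const selection = window.getSelection();
  if (!selection || selection.isCollapsed) {
    return; // nothing selected, so no play popup
  }

  const range = selection.getRangeAt(0);
  const startNode = range.startContainer;

  if (startNode.nodeType === Node.TEXT_NODE) {
    const text = startNode.textContent ?? "";
    let offset = range.startOffset;
    // Walk back until whitespace or the start of the text node.
    while (offset > 0 && !/\s/.test(text.charAt(offset - 1))) {
      offset -= 1;
    }
    range.setStart(startNode, offset);
  }

  // What to hand to the TTS engine, and the node containing the selection
  // (the "current active element" mentioned above could also be consulted).
  const textToSpeak = range.toString();
  const containingNode = range.commonAncestorContainer;
  console.log("speak:", textToSpeak, "within:", containingNode);
});
```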
Highlighting in document (enactor's job):
- parsing by word
- parsing by sentence
- inject spans on words and sentences
- move class around
- we know node, not necessarily word or sentence
- can't use these nodes for tts (would sound disjointed)
- coordinate with speech (see the sketch after this list)
==> no word-level highlighting on first pass
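A first-pass sketch of sentence-level highlighting, since word-level highlighting is punted: inject one span per sentence, speak the full text as a single utterance so it does not sound disjointed, and move a highlight class as the Web Speech API reports word boundaries. The container selector, class name, and naive sentence split are assumptions, and this sketch discards any markup inside the container.

```typescript
// Assumed markup: the readable content lives in a container with class
// "tts-content". Word-level highlighting is deliberately skipped here.
const container = document.querySelector<HTMLElement>(".tts-content");

if (container && "speechSynthesis" in window) {
  const fullText = container.textContent ?? "";

  // Naive sentence split; real parsing by sentence needs more care.
  const sentences = fullText.match(/[^.!?]+[.!?]*\s*/g) ?? [fullText];

  // Inject one span per sentence so a highlight class can be moved around.
  // Note: this replaces the container's children, losing any inner markup.
  container.textContent = "";
  const spans = sentences.map((sentence) => {
    const span = document.createElement("span");
    span.textContent = sentence;
    container.appendChild(span);
    return span;
  });

  // Character offset at which each sentence starts, so boundary events
  // (which report a charIndex) can be mapped back to a span.
  const sentenceStarts: number[] = [];
  let offset = 0;
  for (const sentence of sentences) {
    sentenceStarts.push(offset);
    offset += sentence.length;
  }

  // Speak everything as one utterance so the speech stays continuous,
  // and coordinate the highlight with the reported boundaries.
  const utterance = new SpeechSynthesisUtterance(fullText);
  let current = -1;

  utterance.onboundary = (event) => {
    const next = sentenceStarts.findIndex(
      (start, i) =>
        event.charIndex >= start &&
        (i === sentenceStarts.length - 1 || event.charIndex < sentenceStarts[i + 1])
    );
    if (next !== -1 && next !== current) {
      if (current >= 0) spans[current].classList.remove("tts-highlight");
      spans[next].classList.add("tts-highlight");
      current = next;
    }
  };

  utterance.onend = () => {
    if (current >= 0) spans[current].classList.remove("tts-highlight");
  };

  window.speechSynthesis.speak(utterance);
}
```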
DOM order not necessarily same as desired reading order