Gamification

See below for Y3 notes.

 

The term 'gamification' currently appears to have two meanings: using levels, badges, ratings, or other game-like rewards to motivate people to perform otherwise non-game activities; and using game interface features such as avatars and props instead of more conventional desktop controls such as buttons and checkboxes.

We have discussed the value of using game-like features in First Discovery as a way of motivating and engaging users, breaking away from a 'clinical evaluation' model, and generating buzz.

I’ve put together a rudimentary game for collecting needs and preferences called ‘CanQuest’.

Go to http://canquest.inclusive.com/play.html. You’ll see a game board with an avatar and 3 computer workstations at different locations.

In a different window or on a different device (so you can see both screens simultaneously), open http://canquest.inclusive.com/update.php. This is a target page with some sample text in a box.

You can move the avatar around with the arrow keys. When he encounters one of the computers, you get a pop-up asking for your preference, with 3 choices. You navigate the choices with the up and down arrow keys and select with the Enter key. The 3 computers control 3 different visual interface characteristics: font size, text/background color pairs, and line length and height.

When you make a selection, the target page is adjusted accordingly.

When you’ve made a selection for all 3 characteristics, you get a “Level 1 complete” message, with a (non-functioning) button for Level 2.
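
For the curious, here is a minimal sketch of one way a choice made on the game board could be pushed to the separate target page. This is not the actual CanQuest implementation (which uses update.php); it assumes both pages are served from the same origin and uses the browser's storage event, and all the names (VisualPrefs, canquest-prefs, applyPrefs, the sample-text element id) are invented for illustration.

```typescript
// Shared shape of the three Level 1 preferences collected by the game.
interface VisualPrefs {
  fontSizePx?: number;                      // "font size" workstation
  colors?: { fg: string; bg: string };      // "text/background color" workstation
  lineHeight?: number;                      // "line length and height" workstation
  maxLineLengthCh?: number;
}

const STORAGE_KEY = "canquest-prefs";       // hypothetical key shared by both pages

// Game page: called when the player confirms a choice at a workstation.
function saveChoice(update: Partial<VisualPrefs>): void {
  const current: VisualPrefs = JSON.parse(localStorage.getItem(STORAGE_KEY) ?? "{}");
  localStorage.setItem(STORAGE_KEY, JSON.stringify({ ...current, ...update }));
}

// Target page: apply whatever preferences have been chosen so far.
function applyPrefs(prefs: VisualPrefs): void {
  const box = document.getElementById("sample-text");
  if (!box) return;
  if (prefs.fontSizePx) box.style.fontSize = `${prefs.fontSizePx}px`;
  if (prefs.colors) {
    box.style.color = prefs.colors.fg;
    box.style.backgroundColor = prefs.colors.bg;
  }
  if (prefs.lineHeight) box.style.lineHeight = String(prefs.lineHeight);
  if (prefs.maxLineLengthCh) box.style.maxWidth = `${prefs.maxLineLengthCh}ch`;
}

// The storage event fires in the *other* same-origin window, so the target
// page updates as soon as the game page saves a new choice.
window.addEventListener("storage", (e) => {
  if (e.key === STORAGE_KEY && e.newValue) applyPrefs(JSON.parse(e.newValue));
});
applyPrefs(JSON.parse(localStorage.getItem(STORAGE_KEY) ?? "{}"));
```

The point of the sketch is the separation of control from outcome mentioned in idea 4 below: the game page only ever writes a small preferences object, and anything that can read that object can render the outcome.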

This prototype is meant to explore 4 ideas: 

  1. Engaging experience. Well, it’s not the most fun game you’ve ever seen, but it may be more fun than a form. All kinds of enhancements are possible: personalized avatars, game boards that resemble the actual use environment, etc.
  2. Levels. We have discussed ways of getting some basic settings down first, letting users return to refine. That’s a familiar game structure.
  3. Modularity. A game like this could be a sub-set of a bigger game, with different rooms for different categories of preferences; the games could be swapped out based on what we know about the user and/or the use environment.
  4. Separation of control from outcome. I don’t remember discussing this, but I think there’s some advantage to this separation. Some users may be reluctant to explore an interface directly atop the content they’re using. Maybe there’s research about this? I know I sometimes have trouble undoing the low vision OS setting because the interface suddenly looks so strange.


Here's another example of a fun experience that opens up the world of interface choices: Discovery Cats!


Y3 Additions to concept

I attended the Global Game Jam event in Philadelphia Jan 29-31, 2016. It was one of hundreds of local events, in 93 countries, where 36,000 developers/designers created 6800 games in 48 hours. A very receptive group with many fresh eyes. Their FAQ (scroll down) includes notes on accessibility. Their kickoff keynote began with mentions of cochlear implants and artificial retinas. (I'm trying to locate the video.)

Also, I encountered a good theoretical book on game motivations.

Anyway, here's a proposed 'big picture' framework for badges/levels gamification (a rough data-model sketch follows the list):

  1. First Discovery (FD) and First Explore (FE)
    1. FD is the first level; players get badges for completing each section. They have to complete all sections to level up. This will encourage people to try things in dimensions in which they don't think they have a need or preference. They can do sections in any order (maybe not for everyone?).
    2. Every feature or setting they select becomes a tool they own, not a burden, limitation, or expense – this flips the script on 'accommodation'. If they later encounter a website in which that tool is ineffective, the stigma is more on the website, somewhat on the tool, but much less on the user. See 'Tournament Mode' below.
    3. They can 'go deeper' into a dimension at any point; there's no barrier between FD and FE. This could be visualized as an FD task being a simple interaction screen set on the exterior wall of a building; the FE experience requires the user to go into the building, behind the FD screen, where there are gears, cogs, etc. to adjust. Or a control room located above the FD playing field.
    4. At any point they can enter 'Tournament Mode', where their current settings are applied to content, for ratification ('preview' in 2/2016 tool). The content can be actual sites (or apps?) the person uses, or ones suggested by us or a third party. There could be a way to record both performance (whatever that is) and user satisfaction. Players get points for each tournament, and a dashboard that updates somehow.
  2. Social effects
    1. Players can join (or are automatically subscribed to?) communities of other players who use the same tools. This lets them exchange tips and tricks, etc. This can also be the channel through which new players enter with a copy of a peer's prefs.
    2. Spotter: players in a community who report a mismatch between that community's tool and a particular site/app/situation. Players get points for spotting, and points for upvoting existing spots. Maybe when the votes are high enough the issue becomes a sort of petition and gets wider circulation.
    3. Solver: players in a community who have solved the mismatch somehow. They get points, and other players get points for upvoting solutions.
    4. Bounties: mismatches that can only be solved by the ICT owner/provider (e.g., the website's developer). Players can collectively offer a bounty using their points? 'We' contact the owner/provider when the bounty rises above a certain threshold? There should be a way to reward the bounty offerers AND the owners who fix the problem. Return double the bounty points, and give the owner-fixer some kind of fame payoff?
  3. Expertise and assistance
    1. Players can register as experts in specific features, settings, or products. Or maybe they gain guru points by being upvoted? This gives them some kind of role responsibility, like a forum admin. Maybe they can earn a 30-minute phone call with an ICT owner/provider or a policymaker? I know people who would move heaven and earth to get that.
    2. Players might include assistants, both expert and non-expert. They should still do FD/FE, and they have to earn the expert status. 
    3. Maybe we can offer continuing ed. units? Some way for externally recognized experts (e.g., OTs) to justify their work, and a path to recognition/career for everyone else.
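
To make the framework above a little more concrete, here is a rough data-model sketch. It is purely illustrative: every type and field name is invented, and the numbers in the comments refer to the items in the list above.

```typescript
// Illustrative data model for the badges/levels framework; all names are invented.

type Dimension = "fontSize" | "contrast" | "lineSpacing" | "captions" | "speechRate";

interface Tool {                  // a setting the player now "owns" (item 1.2)
  dimension: Dimension;
  value: string | number;
}

interface Badge { section: Dimension; earnedAt: Date; }

interface Player {
  id: string;
  badges: Badge[];                // one per completed FD section; all of them => level up (1.1)
  tools: Tool[];                  // settings adopted in FD/FE (1.2, 1.3)
  points: number;                 // tournaments, spots, solutions, upvotes all add here
  communities: string[];          // communities of players using the same tools (2.1)
  expertIn: Dimension[];          // registered or upvote-earned expertise (3.1)
}

interface Solution { solverId: string; description: string; upvotes: number; }  // (2.3)

interface Spot {                  // a reported mismatch between a tool and a site/app (2.2)
  id: string;
  reporterId: string;
  tool: Tool;
  target: string;                 // URL or app name
  upvotes: number;
  solutions: Solution[];
  bounty: number;                 // points pooled when only the ICT owner can fix it (2.4)
}

// One possible payoff rule for bounties floated in 2.4: when the owner fixes the
// problem, return double the pooled points to the players who offered them.
function settleBounty(spot: Spot, contributions: Map<string, number>): Map<string, number> {
  const refunds = new Map<string, number>();
  for (const [playerId, offered] of contributions) refunds.set(playerId, offered * 2);
  spot.bounty = 0;
  return refunds;
}
```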

Clearly, many of these ideas are out of scope for PGA, if not totally grandiose and unfeasible. But of course none of this has to be implemented only on a PGA platform; there are many large mainstream social-game platforms out there already, and this could be a sub-community.

Most clearly of all, implementations of these ideas would need to be very aware of the different kinds of players: age, technological ability/confidence, actual level of interest, etc. People have to be able to participate freely and easily.

"Sit in the blue chair."

Denis Anson told us the following story:

"A friend of mine told me about his medical intake to the Navy.  He said, 'I walked into a room.  There was a desk, with a medical person sitting behind it.  In front of the desk were a red chair and a blue chair.  Without looking up, the person said, ‘Sit in the blue chair.’  I did.  He looked up, and said ‘You passed.’”  The test showed that he could hear, that he could see and could see color, and that he could follow orders."

Also, that he could sit down.

Although this story pushes our "don't test the user, test the technology" button, the points stand: a single task can embed several dimensions of interactivity, and inference is both powerful and easy on the user.

Our First Discovery tool designs to date (2/13/2016) have worked with one dimension at a time, in a particular order, going through each task only once. A gamified tool might combine dimensions in tasks that users could perform in a more self-directed order, and could use recombinations to triangulate on results. For example, there could be 3 text entry tasks: one for copying sample text, one for 'free' typing (whatever the user wants to type), and one for entering an answer to a puzzle. All 3 would collect low-level typing data for inference, while the other dimensions might provide input on cognitive processing and dexterity.
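
As a rough illustration of that 'recombination' idea, here is a sketch of how keystroke logs from the three hypothetical text-entry tasks could be pooled for the low-level typing inference while still being kept separate per task for triangulation. The task names, types, and metrics are all assumptions for illustration, not a committed design.

```typescript
// Illustrative only: pooling low-level typing data across the three text-entry
// tasks described above (copying sample text, free typing, answering a puzzle).

type TaskKind = "copy" | "free" | "puzzle";

interface KeyEvent { key: string; timeMs: number; }      // one logged keystroke
interface TaskLog { task: TaskKind; events: KeyEvent[]; errors: number; }

interface TypingEstimate {
  meanInterKeyMs: number;                       // pooled across all three tasks
  errorRate: number;                            // errors per keystroke, pooled
  perTaskInterKeyMs: Record<TaskKind, number>;  // kept separate to triangulate
}

function interKeyIntervals(events: KeyEvent[]): number[] {
  const gaps: number[] = [];
  for (let i = 1; i < events.length; i++) {
    gaps.push(events[i].timeMs - events[i - 1].timeMs);
  }
  return gaps;
}

const mean = (xs: number[]): number =>
  xs.length ? xs.reduce((a, b) => a + b, 0) / xs.length : NaN;

function estimateTyping(logs: TaskLog[]): TypingEstimate {
  const allGaps = logs.flatMap((l) => interKeyIntervals(l.events));
  const totalKeys = logs.reduce((n, l) => n + l.events.length, 0);
  const totalErrors = logs.reduce((n, l) => n + l.errors, 0);
  const perTask = {} as Record<TaskKind, number>;
  for (const log of logs) perTask[log.task] = mean(interKeyIntervals(log.events));
  return {
    meanInterKeyMs: mean(allGaps),
    errorRate: totalKeys ? totalErrors / totalKeys : 0,
    perTaskInterKeyMs: perTask,
  };
}
```

A large gap between, say, the copy task and the puzzle task would then point more toward cognitive load than dexterity, which is the kind of inference a single-pass, one-dimension-at-a-time form can't make.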