Privacy Needs and Preferences Brainstorming Aug 25 2016
1. What are the possible privacy preferences someone might have?
Setting levels of access
Physical space
Privacy settings depend on where you are / environment
Considerations in physical space applying to digital space
Exceptions - overriding all else
“Low-friction” easy way to do this - e.g. switching on “do not disturb”
Based on temporary circumstance
Qualifiers:
Time limits
Locations
Nature of physical space
Specific people/companies/sites
Groups of trust
Purpose/goal/task - specific
Who is using the information?
App-specific
Threshold for value exchange
What am I willing to give up to get/accomplish what I want/need → User-determined value
Bundling - forced to give up information without granular choices
(from UX perspective, bundling can help to see the bigger picture/impact of sharing information)
Understanding technical limitations / motivations (what will it be used for?) → helps with trust
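The qualifiers listed above (time limits, locations, specific requesters) could be sketched as a small data structure; this is a hypothetical illustration, not a GPII API - all names here are invented for the example.

```python
from dataclasses import dataclass
from datetime import time

@dataclass
class QualifiedPreference:
    """A privacy preference qualified by time, location, and requester.

    Empty qualifier fields mean "applies everywhere / to everyone",
    mirroring the idea that qualifiers narrow an otherwise global setting.
    """
    resource: str                        # e.g. "location", "microphone"
    allow: bool
    hours: tuple = None                  # (start, end) time window, or None
    locations: frozenset = frozenset()   # e.g. {"home", "work"}; empty = any
    requesters: frozenset = frozenset()  # specific apps/sites; empty = any

    def applies(self, now: time, place: str, requester: str) -> bool:
        if self.hours and not (self.hours[0] <= now <= self.hours[1]):
            return False
        if self.locations and place not in self.locations:
            return False
        if self.requesters and requester not in self.requesters:
            return False
        return True
```

For example, "share my location, but only at work during office hours" would be `QualifiedPreference("location", True, hours=(time(9), time(17)), locations=frozenset({"work"}))`.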
2. Personal information that someone might want to protect
Location
Anonymous log-in names (e.g. to video conferences)
detection of device information - device name, serial number, operating system, location, endpoint ID, MAC address
attention/consciousness (e.g. push notifications, amber alerts)
Limiting interruptions
Protected resources
Ways of communicating
Use-cases:
Accessing services on the internet
setting/storing/applying GPII preferences
Social networks
Mobile device
Location
mic/camera input
etc
3. How is private information being used currently?
Create a list of misuses of information on the wiki? - what we want to avoid/protect against
have a community meeting about seniors' fraud?
what constitutes private information?
what is being gathered and for what purpose?
what are the conflicts between privacy requirements and what is actually being done today?
What can practically be done?
What can elicit further understanding - e.g. value, threshold value
reframing/highlighting what’s going on
are there constraints in fulfilling privacy wishes?
Who is vulnerable in this context?
Other Notes:
Map out the complex problems and where we can intervene?
Different ways of packaging/presenting it
Information-driven
Service-driven
CSS inheritance / weighting system - exception classes ?
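One way to read the CSS-inheritance idea: each rule carries a weight (like CSS specificity), and the highest-weight matching rule wins, so a narrow exception overrides a general default. A minimal sketch, assuming weights and rule shapes that are purely illustrative:

```python
def resolve(rules, context):
    """Return the decision of the highest-weight matching rule.

    rules: list of (conditions, weight, decision) tuples, where a rule
    matches if every key in its conditions dict equals the context's value.
    Higher weight = more specific, like a CSS specificity score.
    """
    best = None
    for conditions, weight, decision in rules:
        if all(context.get(k) == v for k, v in conditions.items()):
            if best is None or weight > best[0]:
                best = (weight, decision)
    return best[1] if best else None

rules = [
    ({}, 0, "ask me first"),                                 # global default
    ({"place": "home"}, 10, "allow"),                        # location rule
    ({"place": "home", "app": "ad-tracker"}, 100, "deny"),   # exception class
]
```

With these rules, a request from an ordinary app at home is allowed, the ad-tracker exception is denied even at home, and anything unmatched falls back to "ask me first".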
Work done with AARP, CARP? (http://www.aarp.org/ http://www.carp.ca/) - mapping of groups of trust, officials who access your info, etc.
Creating a trust map
from intimate to public
from high worth to disposable
from high risk to no risk
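The three axes above could be combined into one trust-map entry per person or service. A hypothetical sketch - the 0-10 scale and the default-sharing thresholds are invented assumptions, not agreed values:

```python
from dataclasses import dataclass

@dataclass
class TrustEntry:
    """One entry in a trust map, scored along the three axes."""
    name: str
    intimacy: int  # 0 = public stranger .. 10 = intimate
    worth: int     # 0 = disposable data .. 10 = high-worth data
    risk: int      # 0 = no risk .. 10 = high risk

    def share_by_default(self) -> bool:
        # Illustrative policy: share only when trust is high and risk is low.
        return self.intimacy >= 7 and self.risk <= 3
```

So a close friend (intimacy 9, risk 2) gets default sharing; an ad network (intimacy 1, risk 9) does not.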
Personal information
Include read and write access?
E.g. Facebook - others having permission to post on your wall/timeline
Apps that invite your friends (spam)
Privacy firewall
All or nothing
Ask me first
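The two firewall policies above could look like this in code; a hypothetical sketch where the mode names and the callback-based prompt are illustrative assumptions:

```python
def firewall(mode, allow_all=False, ask_user=None):
    """Return a decision function for incoming data requests.

    "all-or-nothing": a single fixed yes/no for everything.
    "ask-me-first": defer each request to the user (here, a callback
    standing in for an interactive prompt).
    """
    def decide(request):
        if mode == "all-or-nothing":
            return allow_all
        if mode == "ask-me-first":
            return ask_user(request)
        raise ValueError(f"unknown mode: {mode}")
    return decide
```

Usage: `firewall("all-or-nothing", allow_all=False)` blocks everything, while `firewall("ask-me-first", ask_user=prompt)` routes each request through the user's choice.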
Inferred information - services that exchange data can infer more about you than either service holds alone - inference engines
Increased risk to user
Often used in seniors' fraud - (organise a community meeting about this?)
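A toy illustration of the inference risk above: neither service alone knows both who you are and where you go, but joining their records on a shared identifier (here a made-up MAC address) reveals the combination. The data is fabricated for the example.

```python
# Service A knows identities; Service B knows visit history.
service_a = {"aa:bb:cc": {"name": "Alice"}}
service_b = {"aa:bb:cc": {"visits": "clinic"}}

# Joining on the shared identifier infers a record neither service held.
inferred = {k: {**service_a[k], **service_b[k]}
            for k in service_a.keys() & service_b.keys()}
```

After the join, `inferred["aa:bb:cc"]` links the name to the visit - the kind of combined profile an inference engine produces.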