Privacy Preferences Design: Risk and Security Assessment
Introduction
In the current technical climate, systems that collect and use personal information generally take an "all-or-nothing" approach: individuals can either accept a service provider's privacy policy in its entirety or not use the service at all. This encourages overly broad collection of personal information by service providers and limits many users' access to convenient or necessary services. Moreover, most privacy agreements are dense with legalese and technical complexity, and short on detail about how personal information will actually be stored and used, which makes them difficult to understand, consent to, or contest individually. When users accept such broad, non-personalized privacy policies, they are put at risk of having their personal information used in ways they are not aware of. The goal of this project is to drastically reduce this personal risk, and to create new opportunities for adaptable service delivery that meets an individual's unique needs, by allowing for the creation and application of individualized privacy policies that are personally expressed and understood by the consumer. Without the privacy preferences strategy that this project has designed, the greatest risk falls on the individuals who are most vulnerable to the misuse of their private information. For example, if a person's health information is made available to a prospective employer, it might affect the outcome of a job application.
Security is of utmost importance when dealing with privacy. Without comprehensive security, there is no possibility of maintaining personal privacy. During the co-design phases of this project, technical risk and security assessments were conducted. These assessments included engaging technical security experts in analyzing the designed tools for possible security issues, while keeping the Personal Information Protection and Electronic Documents Act (PIPEDA) in mind. Strategies to mitigate risk were formed and explored. The results and strategies from the risk and security assessments informed the co-design process to ensure that the developed tools are able to meet the highest standards for security.
In order to evaluate the risks associated with the Privacy Preferences Design, the team broke them down into three categories, individuated by who or what is at risk. These are:
1. Risks to the user
2. Risks to the company or service collecting and using private data
3. Risks to the privacy preferences initiative
These three areas of risk are discussed below, followed by an assessment of the security issues associated with personal privacy preferences, and a discussion of the techniques that will help mitigate these risks.
Risks to Users
General Privacy Risk
Using the internet typically involves providing personal information to an organization or company in order to use a service they provide. An example is submitting one's credit card details to make an online purchase. This is practical and convenient, but it carries the risk that the credit card details will be intercepted and used fraudulently by someone else.
Other examples of risks due to exposure of personal information include:
- Selling personal information to third parties for aggregate data collection
- Access to resources through stolen user credentials, such as:
  - Bank accounts
  - Health records
- Identity theft, e.g., impersonating users in sent emails or instant messages
- Bullying and stalking
- Use of a person's location information to facilitate physical theft
- Sharing of data that was assumed to be private with a wider audience than was considered when the data was created (private chats, images, movies)
The convenience of using online services must be balanced by mitigating the risks to privacy. Persons with disabilities, persons who are aging, and others who face discrimination, stereotyping, marginalization or exclusion have the most to gain from smart services that respond to personal data, but they are also the most vulnerable to the misuse of private information (e.g. denial of insurance, jobs or services, fraud, etc.).
Lack of Awareness of Privacy Risk
When personal privacy risks are not well understood, users are less likely to seek out and take advantage of optional privacy protections. As a result, some or all of their personal information may be exposed to unwanted parties. Further, users may avoid the use of certain services altogether out of fear of exposing their personal information (e.g. online banking), or may agree to privacy policies without fully understanding their implications.
The goal of privacy preferences design is to demystify privacy and help users feel comfortable and secure in making informed choices about their personal privacy. This is a core concept of Privacy by Design: privacy should be user-centric and within users' control. The Privacy Storybuilding tool aims to mitigate these user risks through a playful and engaging approach.
The storybuilding tool employs elements of online gaming through the use of hypothetical situations, story development and a non-threatening, informal tone. It guides the user through a series of dialogues and engages the user to create a story about their privacy needs. It invites users to actively understand what privacy is, and encourages them to author their own individual privacy policy. It includes features such as an introductory video, tutorials and/or "learn more" options that users can consult at their own pace. This background material helps clarify different privacy issues to users.
The design of the tool was the outcome of brainstorming and co-design sessions with stakeholders. Usability testing will provide feedback as to whether or not the tool achieves the goal of mitigating user risk, and is proposed in the future work plan for this project.
Services Not Supporting a Personal Privacy Policy
Another risk to the user stems from limitations that a company or service may have in fully implementing a user's personal privacy preferences. In this case, even though the user has declared preferences for how their privacy is to be respected, the service may only be capable of, or willing to, match a subset of them. This risk can be mitigated by having the service inform the user of which parts of their personal privacy preferences are being followed and which are not; a minimal sketch of such a report is given below. The user then has an opportunity to decide whether they wish to proceed or, instead, reject use of that service. See the Policies section of "PIPEDA and Privacy Preferences Design", which summarizes an organization's legal obligations to users regarding collecting, using, and disclosing personal information.
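To make this kind of disclosure concrete, the following Python sketch offers a minimal illustration; the preference names, the structure of the "compliance report", and the function are hypothetical assumptions rather than part of the project's designs. It shows a service comparing a user's declared preferences against what it can support and reporting which preferences are honoured and which are not.

# Hypothetical sketch: a service reports which of a user's declared
# preferences it can honour, so the user can decide whether to proceed.

# The user's declared preferences: preference name -> desired setting.
user_preferences = {
    "location_tracking": "deny",
    "third_party_sharing": "deny",
    "data_retention_days": 30,
}

# What this particular service is able (or willing) to support.
service_capabilities = {
    "location_tracking": "deny",
    "third_party_sharing": "allow",  # e.g. the business model depends on sharing
    "data_retention_days": 365,
}

def compliance_report(preferences, capabilities):
    """Return which preferences the service honours and which it does not."""
    honoured, not_honoured = {}, {}
    for name, wanted in preferences.items():
        offered = capabilities.get(name)
        if offered == wanted:
            honoured[name] = wanted
        else:
            not_honoured[name] = {"requested": wanted, "offered": offered}
    return {"honoured": honoured, "not_honoured": not_honoured}

report = compliance_report(user_preferences, service_capabilities)
print(report["not_honoured"])  # what the user reviews before deciding to proceed or decline

The tool could present the "not honoured" portion of such a report to the user in plain language before they decide whether to proceed or to reject the service.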
Risks to Companies
The risk to companies is chiefly legal. Typically, a company has a single privacy policy that covers all users. Using the Privacy Preferences Design framework implies multiple policies, one per user, and handling multiple privacy policies represents a greater degree of legal risk than handling a single policy does. On the other hand, assuring users that their privacy preferences will be respected makes the company and its services attractive to more users.
There is also a risk of mismatches: situations where an individual's personal privacy preferences cannot be fully accommodated by the service provider in a way that is both usable and sustainable from a business perspective. What happens when a service provider cannot fully meet the needs and preferences stated by the user? What if a service provider's business model is predicated on using personal information in a way that may not fully meet an individual's abstract privacy preferences, but where the individual may nonetheless find the service valuable enough to accept? What if the user experience of a service would suffer significantly if a user's privacy policy were implemented exactly as specified?
The implementation of privacy preference setting tools must include a variety of ways to support users and service providers in negotiating specific compromises in cases where service functionality may be partially or entirely limited by the user's privacy preferences. For example, the system could provide a dialog (with the necessary information provided by the service) that informs the user that some essential features of the service will not be available if their privacy preferences are to be met. Ideally, the service would provide options to the user, such as alternate features that can be used, or the option to apply a temporary exception to their preferences, allowing the service to access the necessary information only while the service is in use (or while certain features of the service are in use).
Part of this negotiation would include the provision of ephemeral or time-limited exceptions to a privacy agreement. This helps to ensure that access to the additional personal information expires, and/or that the information is erased, after a period of time or when the user quits the service. For example, if a user has declared a general preference for blocking location tracking and then attempts to use a mapping service that requires location tracking, the user could be given the option to allow tracking only this time, or only while the service is in use; a minimal sketch of such an exception is given below. Further suggestions for designing appropriately reciprocal user interfaces, which help to mitigate the user experience and business risks of mismatches, are included in the Feasibility report for the project.
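As a rough illustration of such a time-limited exception, the Python sketch below represents an exception that applies to a single service and lapses automatically at an expiry time; the class, field, and service names are hypothetical assumptions, not part of the project's designs.

# Hypothetical sketch: a time-limited exception to a user's general
# preference, e.g. allowing location access only for a single session
# with a mapping service, or until an expiry time is reached.

from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class TemporaryException:
    preference: str       # e.g. "location_tracking"
    service_id: str       # the service the exception applies to
    expires_at: datetime  # when the exception lapses

    def is_active(self, now=None):
        now = now or datetime.now(timezone.utc)
        return now < self.expires_at

def may_access(preference, service_id, general_policy, exceptions):
    """Allow access if the general policy permits it or an active exception applies."""
    if general_policy.get(preference) == "allow":
        return True
    return any(
        e.preference == preference and e.service_id == service_id and e.is_active()
        for e in exceptions
    )

# "Only allow this time": grant a short-lived exception when the session starts
# and let it lapse (or revoke it) when the session ends.
exception = TemporaryException(
    preference="location_tracking",
    service_id="example-mapping-service",
    expires_at=datetime.now(timezone.utc) + timedelta(hours=1),
)

print(may_access("location_tracking", "example-mapping-service",
                 {"location_tracking": "deny"}, [exception]))  # True while the exception is active

An "only while the service is in use" option could work the same way, with the exception revoked as soon as the session ends rather than waiting for the expiry time.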
Risks to the Privacy Preferences Initiative
Three risks to the project are:
- Companies or services not making use of privacy preferences
- Changes to privacy laws or specifications that conflict with the project
- Project failure, which would leave individuals without privacy preferences and maintain the status quo for users' privacy
One risk is that there is no uptake by companies or services. When a company or organization does not implement Privacy Preferences Design, a user's privacy preferences are not respected even though the user has declared their preferences. As a result, users might decide to not use the company's service, or limit their use of the service to situations where their preferences are respected.
This risk is mitigated by working with companies and advocating the benefits of adopting the Privacy Preferences Design strategy. Benefits include improvement of the company's brand and attractiveness to customers, and compliance with privacy laws. The cost of implementing privacy preferences is relatively low compared to the cost of failing to protect users' privacy: users are more likely to become customers if assured that their privacy is protected, and that protection improves users' confidence in a company's services. In addition, companies must comply with privacy laws, such as PIPEDA and the European Union's GDPR, in order to conduct business. These laws reflect the fact that privacy is a current and prominent concern of users and governments. Implementing Privacy Preferences Design is one way for a company or organization to meet these legal requirements.
Secondly, the global regulatory frameworks for privacy are complex and in a state of flux. The main difference between European and North American privacy laws is that European laws are centralized and stronger, imposing regulations on companies, whereas US laws are a mixture of federal and state legislation and tend to favour self-regulation by companies. Canadian legislation sits somewhere in the middle, inclined towards the strength of European law while permitting self-regulation. Privacy preferences design represents a way to cut across these differences. Putting privacy into the hands of users, who provide companies with their own individual privacy policies, means that privacy laws can be met regardless of geographic region, provided companies respect users' privacy preferences. The only requirement is that the privacy preferences are capable of capturing and reflecting the strongest legislation, that of the European Union.
Privacy preferences design is a complex space, and as a result there are several ways in which the project could fail. The project has produced designs and prototypes for privacy tools; however, the actual tools have yet to be built. The tools must be developed in such a way that the user experience is simple and that, for service providers, they present an improvement over the current approach; otherwise the barriers to respecting and responding to the privacy preferences of consumers could be too great for companies to overcome. Another vulnerable aspect of the project is the bidirectional relationship between individuals and service providers. Many service providers will likely require incentives to adopt this strategy beyond solely meeting customer desires. Standardization will help provide such incentives, but policy and legislation will likely also be required to ensure there are motivations to respect privacy and to support an individualized model.
From the perspective of end users, the final tools and workflows must be friction-free, enabling them to easily set and understand their preferences and use the services they require. Continuing the ongoing co-design process, together with usability testing and broad stakeholder consultation, should minimize this risk.
Finally, the possibility of security breaches, as described in the security assessment section below, is a concern. The strategies for addressing these security issues are outlined there. Providing guidance on implementing these strategies, through a comprehensive resource that draws on existing documents and tutorials on securing systems, should lower this risk considerably.
Security Assessment
As noted above, introducing systems that gather, present, and transact personal privacy preferences and the uses of personal information increases the risk of exposure of personal information. A user's privacy preferences do, in a number of ways, constitute information about the user (that is, personal information) and can be used to identify aspects of online use and track the user's online presence. Even when not connected with directly identifying information such as a user account or email address, personal privacy policies can be used to "fingerprint" or infer a user's identity. Anonymous information, when aggregated with other data, can often be de-anonymized. If the preferences are somehow traced back to the individual, even more sensitive personal information could be discovered, such as their credit card number, phone number, health information, and so on. As a kind of personal information, users' privacy preferences are subject to PIPEDA, and to legal requirements concerning their collection, use, and disclosure. See the policy relationship between PIPEDA and Privacy Preferences Design described in "PIPEDA and Privacy Preferences Design".
Nonetheless, there is a notable tension in privacy preferences that needs to be addressed. They are a single source that is meant to be applied across many services and contexts, and as such they are meant to be shared; at the same time, the personal information they contain must be shielded. Portions of a user's personal privacy preferences may be contextualized to a particular service provider or even to a particular transaction. In these cases, such information should be used only by the service that the user has granted access to, and should not be shared by that service with third parties unless the user allows sharing.
To avoid exposure of the preferences, the architectural plan and implementation of Privacy Preferences Design needs to consider and address the following risks, which are dependent on the implementation approach, software architecture, and technologies used:
- Compromise of the server or databases that store a user's privacy preferences
- Eavesdropping on communication of privacy preferences ("man in the middle" attacks, where unsecured communications can be intercepted by a third party)
- Access to preferences when stored on a user's device (including physical access such as theft or virtual access such as via malware)
- Impersonation of a privacy-preferences-consuming website, or of websites trusted by the user
- Impersonation of the user, such as stolen user credentials (username and password and so on)
- Risks associated with specific data stored in the privacy preferences (such as a list of trusted websites)
These risks can be addressed using standard security measures:
- Encrypt preferences using symmetric or public-key encryption algorithms with sufficiently large keys, such as AES or, for key exchange, RSA (a brief sketch of encrypting preferences at rest follows this list)
- Secure transmission of personal information, e.g. HTTP over TLS (also known as HTTPS)
- Use multi-factor authentication, such as combining a passphrase with a one-time PIN or a hardware key (e.g. YubiKey)
- Use access tokens to grant access to personal information, so that the token's lifetime can be controlled and the token revoked if compromised
- Avoid unencrypted storage of privacy policies on users' devices, since these may be lost or stolen
- Avoid storage on shared devices such as public workstations
- Protect servers that store user privacy policies using firewalls, intrusion detection systems, and auditable logging/monitoring systems
- Follow standards and guidelines for creating secure applications, such as the Open Web Application Security Project (OWASP) Top 10 security risks
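As one concrete illustration of the first and fifth measures above (encrypting stored preferences rather than keeping them in the clear), the Python sketch below uses the third-party cryptography package's Fernet recipe, which is built on AES with an integrity check. This is a minimal sketch only: key management, i.e. where and how the key itself is stored and protected, is deliberately left out and would need its own design.

# Hypothetical sketch: encrypting a user's privacy preferences at rest
# using the "cryptography" package's Fernet recipe (AES with an HMAC,
# per the Fernet specification). Requires: pip install cryptography

import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, held in a secure key store, never alongside the data
fernet = Fernet(key)

preferences = {"location_tracking": "deny", "third_party_sharing": "deny"}

# Serialize and encrypt before writing to disk or a database.
ciphertext = fernet.encrypt(json.dumps(preferences).encode("utf-8"))

# Decrypt and deserialize only when the preferences are needed again.
restored = json.loads(fernet.decrypt(ciphertext).decode("utf-8"))
assert restored == preferences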
Risk of Improper Deployment
There are also risks associated with the use of the Privacy Preferences Design software itself. These risks are twofold:
- Software implementation flaws could result in compromised personal information: for example, unprotected APIs, lack of authentication and authorization, or vulnerable code that is exposed to SQL injection, cross-site scripting (XSS) or other attacks. To mitigate this issue, thoughtful design and implementation of the software is required, and attention must be paid to properly escaping and validating user input throughout the system (see the sketch after this list).
- Another risk arises when the software is not deployed properly, for example when HTTP is used instead of HTTPS to transmit sensitive data on the open web, or when the server is not properly protected behind a firewall. To mitigate this issue, the software producer needs to provide instructions on how to deploy the software properly, deployment personnel should follow those instructions, and employee access to personal information stored in databases or server logs must be limited.
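To make the input-handling point above concrete, the short Python sketch below contrasts an injection-prone query built by string concatenation with a parameterized query, using the standard library's sqlite3 module; the table, columns, and data are invented for illustration.

# Hypothetical sketch: avoiding SQL injection by binding untrusted values
# as parameters instead of splicing them into the query text.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE preferences (user_id TEXT, policy TEXT)")
conn.execute("INSERT INTO preferences VALUES ('alice', 'deny location_tracking')")
conn.execute("INSERT INTO preferences VALUES ('bob', 'deny third_party_sharing')")

user_id = "alice' OR '1'='1"  # malicious input supplied by an attacker

# Unsafe: the attacker-controlled value becomes part of the SQL statement
# and the query returns every user's policy.
unsafe = "SELECT policy FROM preferences WHERE user_id = '" + user_id + "'"
print(conn.execute(unsafe).fetchall())

# Safe: the value is passed as a bound parameter and never interpreted as SQL,
# so this query returns nothing for the malicious input.
safe = "SELECT policy FROM preferences WHERE user_id = ?"
print(conn.execute(safe, (user_id,)).fetchall())

The same principle, binding untrusted values rather than concatenating them into query text, applies to whichever database layer the tools eventually use, as does consistent output encoding to guard against XSS.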
Conclusion
In this document, we have outlined a number of risks inherent in designing and implementing systems that support the personal expression of privacy policies by individuals, and the challenges service providers face in matching those preferences. Several strategies for mitigating these risks have been discussed, such as providing user interfaces and channels for negotiating exceptions to personal privacy policies. Security has been identified as a crucial foundational requirement for protecting both personal information and the transaction of personal preference policies, and we have outlined a set of design, software architecture, and technical strategies that may help to provide a firm basis for the security of such personalized systems. Ongoing research is necessary to maintain and update a workable ecosystem of policy guidelines, reusable design patterns, user interface components, and infrastructural services that will support this vision. Continued vigilance against unexpected risks is required to maintain the viability of the privacy preference compliance enabled through a personal privacy preference tool.