Automatic adjustment of metadata based on usage

Background

The Floe metadata tools are based on the AccessForAll approach, which emphasizes personalization through systems of transformable, flexible resources that each meet different user needs (Nevile and Treviranus 215). Floe has developed automated and author-refinable metadata authoring tools (covered in D202.2) that can assist content authors in creating metadata to support personalization. This metadata helps matchmaking systems and search engines deliver appropriate content for a given individual, tailored to their needs and preferences. To improve the quality and appropriateness of the matching process, and to give users a way to send feedback to content authors and to request alternative content, feedback tools are required. Such feedback can also spur content improvement and the creation of more diverse alternatives based on user demand. Content authors, for example, may not be aware of how suitable their resources are for users with different needs, or alternative formats may simply not exist. The best judge of whether a resource actually meets the needs it claims to meet is the user who has those needs.

Feedback Tools

The feedback component of the Floe metadata tools will improve the overall accessibility of content by allowing users to help in the refinement of that content and its metadata. These tools consist of freely available, open-source JavaScript modules that can be incorporated into content delivery interfaces.

The goal of this document is to provide an overview of the requirements and design considerations that informed the creation of these tools, along with examples and links to source code and demonstrations. Our approach has been to design holistically, defining the full range of features and user interactions needed for a robust user feedback system. To prove and test these designs, a more limited proof-of-concept implementation, nonetheless built on high-quality, scalable code and polished user interfaces, has been produced and will continue to be iterated on beyond the scope of this project.

Figure 1 - The Metadata Feedback Tool

Figure 1 above shows the current user interface designs for the Metadata Feedback Tool, which is displayed along the top of the page. This user interface is intended to be embedded within a content delivery or content management system, providing unobtrusive, ubiquitous access to both preference customization and the feedback tool. It includes icons indicating the status of the current content as it relates to the user’s preferences. For example, if the content contains alternatives that the user has requested, the icons will indicate this.

Activating the “+” symbol at the left side of the tool will display the quick preferences editor (see Figure 2 below). This editor will allow the user to adjust his or her preferences, independent of the current content, with only a few clicks.

Figure 2 - Quick preferences editor

Summary of Features

In order to meet the goals of both 1) improving the performance of automated matchmaking algorithms and 2) encouraging the creation of new content alternatives and formats, the feedback tool is designed to provide users with the ability to:

  • improve metadata,
  • ask for help with content that doesn’t work for them,
  • contribute additional alternative content, and
  • rate the accuracy and appropriateness of a match in order to improve the matching algorithms.

Each of these is described in more detail below.

Improving metadata

Whether metadata is automatically generated or manually created, errors will inevitably occur. A content author may, for example, provide incorrect information or neglect to specify relevant information; these mistakes can be compounded by a lack of expert knowledge regarding accessibility, access features, and the needs of individuals with disabilities. As a result, a user who requests content to meet certain criteria (e.g. that it provides textual alternatives to audio) will sometimes encounter content that doesn’t meet those criteria.

The feedback tool will allow the user to improve metadata by modifying it to make it more accurate. The feedback tool will present users with icons indicating which preferences are met by the current resource based on the current metadata, and will allow users to a) correct incorrect information and b) add missing information.
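As a concrete illustration, a correction might be expressed as a small structured record asserting the corrected access properties of the resource. The sketch below is hypothetical: the accessibilityFeature property and its values are drawn from the schema.org accessibility vocabulary that the Floe metadata tools build on, but the payload shape, field names, and resource URL are invented for illustration.

    // Hypothetical shape of a user-submitted metadata correction. The
    // accessibilityFeature property and its values come from the schema.org
    // accessibility vocabulary; the surrounding structure is illustrative.
    var correction = {
        resourceId: "http://example.org/resources/video-42",
        changes: {
            // Assert that the resource does, in fact, provide these features.
            accessibilityFeature: ["transcript", "captions"]
        },
        timestamp: new Date().toISOString()
    };
    // Submission is delegated to whichever plug-in the integrator has
    // configured (see the plug-in sketch below).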

For example, Figure 3 below shows a dialog that will be displayed if the user clicks on the “transcripts” icon. The dotted green line around the icon indicates that the transcripts for this resource only partially meet the user’s preferences. The dialog provides more specific information and allows the user to edit the transcripts or the information about the transcripts to improve it.

Figure 3 - Interface to edit transcripts

If the user chooses to edit the transcripts and metadata, they will be presented with the dialog shown in Figure 4 below. This dialog gives the user the opportunity to edit the transcripts to correct mistakes or add missing information. This represents a form of “crowd-sourced” feedback and correction, where the workload of improving alternative formats such as captions or transcripts can be distributed amongst a community of contributors (Bigham et al. 3). This approach is particularly effective in the context of open access repositories where users are not just consumers but also participants who share a sense of responsibility or stewardship over the content.

Figure 4 - Interface for editing transcripts

The feedback tool is being built with a plug-in architecture that will allow integrators to configure how metadata corrections are handled. A plug-in will be created to submit metadata corrections to the Learning Registry (http://learningregistry.org), but integrators will be able to create their own plug-ins to process the metadata in whatever way is most appropriate for their content delivery systems.
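A minimal sketch of how such a plug-in might be registered and invoked, assuming a simple registration API; the names registerPlugin and handleCorrection are hypothetical rather than the actual Floe interface, and the endpoint URL is a placeholder, not a documented Learning Registry address:

    // Hypothetical plug-in pattern: integrators supply handlers that decide
    // where metadata corrections are sent.
    var feedbackTool = {
        plugins: [],
        registerPlugin: function (plugin) {
            this.plugins.push(plugin);
        },
        submitCorrection: function (correction) {
            // Hand the correction to every configured plug-in.
            this.plugins.forEach(function (plugin) {
                plugin.handleCorrection(correction);
            });
        }
    };

    // Example plug-in that forwards corrections to a repository service.
    feedbackTool.registerPlugin({
        handleCorrection: function (correction) {
            fetch("https://registry.example.org/publish", {
                method: "POST",
                headers: { "Content-Type": "application/json" },
                body: JSON.stringify(correction)
            });
        }
    });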

Asking for help

Content authors will not always have the time, resources, or foresight to create a broad range of alternative formats for the content they author. As a result, users will eventually encounter content that they wish to access but that is not in a format that is accessible to them.

The feedback tool will allow users to submit a request for assistance with a resource and to be notified when adaptations become available. Figure 5 below shows the dialog that will be displayed when the user clicks on the “Requests” icon in the Feedback Tool.

Figure 5 - Interface showing existing alternative content requests

The dialog will show the user a list of existing alternative content requests, including the number of people who have expressed interest in the alternative content. If the list already includes the user’s desired alternative content, the user will be able to a) “vote” for the adaptation (indicating that they, too, are interested) and b) add themselves to the “notification” list for any of them (see Figure 6).

Figure 6 - User prompted for email after voting for a request

If the user clicks the “add request” button at the bottom of the dialog, they will be presented with the dialog shown in Figure 7 below. This dialog will allow the user to specify what form of alternative content they would like. Based on their selection, they will be presented with another dialog, shown in Figure 8, which will allow the user to provide specific information about their request and to (optionally) provide an email address if they wish to be notified when their requested adaptation becomes available.

Once again, this approach takes advantage of the power of user contributions; in this case, the ability for users to identify gaps and specify what they think is most important. Rather than being passive consumers of web content, they are able to help influence and contribute to it.

Figure 7 - Interface allowing user to request alternative content

Figure 8 - Interface allowing user to request transcripts for a video
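For illustration, each request shown in these dialogs might be represented internally as a record like the following sketch; all field names, and the voteForRequest helper, are hypothetical:

    // Hypothetical shape of an alternative-content request.
    var request = {
        resourceId: "http://example.org/resources/video-42",
        requestedAlternative: "transcript",
        details: "A transcript of the video narration.",
        votes: 12,
        notifyList: [],   // email addresses of users who opted in (Figure 6)
        fulfilled: false
    };

    // Voting for an existing request, optionally joining the notification
    // list, as described for Figures 5 and 6.
    function voteForRequest(request, email) {
        request.votes += 1;
        if (email) {
            request.notifyList.push(email);
        }
    }

    voteForRequest(request, "learner@example.org");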

As described above, the feedback tool is being built with a plug-in architecture that will allow integrators to configure how requests for adaptations are handled. In future versions, a plug-in will be created to submit requests to Prosperity4All’s assistance on demand and document transformation infrastructure. Integrators will also be able to create their own plug-ins to process requests in whatever way is most appropriate for their content delivery systems.

Contributing alternatives

As described above, some users will request help in making a resource more accessible to them. The feedback tool will also allow users to respond to these requests by contributing alternative content.

The same interface that allows users to view existing requests for adaptations will include the option to fulfill a request. This may involve uploading a file, typing information into the interface, recording an audio file, or other actions, depending on the nature of the requested resource and the capabilities of the user responding to the request. The feedback tool will coordinate between the repository interface plug-in and the assistance, crowdsourcing, and transformation plug-ins to register the new content in the repository and to email users who requested notification of the new alternatives.
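A sketch of what this coordination might look like when a request is fulfilled; the plug-in object and its registerAlternative and sendEmail functions are names invented here for illustration, not the actual Floe API:

    // Hypothetical fulfillment flow: register the new alternative through
    // the repository plug-in, mark the request fulfilled, and notify users
    // who opted in to notifications.
    function fulfillRequest(request, alternative, plugins) {
        plugins.repository.registerAlternative(request.resourceId, alternative);
        request.fulfilled = true;
        request.notifyList.forEach(function (email) {
            plugins.notifier.sendEmail(email,
                "A requested alternative is now available: " + request.resourceId);
        });
    }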

For example, if a user has asked for a description of an image in a particular language, someone who speaks that language might create an audio recording of himself or herself describing the image. If a student requests captions for a video, another student in the same course could create the captions and make them available to everyone accessing the video.

Improving the Matching Algorithms

In addition to allowing users to directly refine metadata, the feedback tool will also allow users to provide general feedback regarding the suitability of the resource for their personal learning needs and preferences. This feedback will be used to automatically derive usage metrics, or paradata: a form of metadata that records how, and in what context, a learning resource is used (Cheetham et al.).
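As a simplified illustration, a paradata record derived from such feedback might look like the following. It loosely follows the actor/verb/object structure of the Learning Registry paradata specification, but the field values and the preferences extension shown here are invented; this is not a conformant record:

    // Illustrative paradata record, loosely modelled on the Learning
    // Registry's actor/verb/object structure; not a conformant record.
    var paradata = {
        activity: {
            actor: {
                objectType: "learner",
                // Stated preferences of the user giving feedback, so that a
                // matchmaker can weigh feedback from similar users.
                preferences: ["audioDescription"]
            },
            verb: {
                action: "reported-unsuitable", // hypothetical verb
                date: "2015-06-01"
            },
            object: "http://example.org/resources/video-42",
            content: "Visual content lacks the requested audio descriptions."
        }
    };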

For example, if a user has indicated a preference for audio descriptions of any visual content, and they are provided with visual content that does not have audio descriptions, they will be able to click on the “sad face” icon in the Feedback Tool. They will be presented with the dialog shown in Figure 9, below. This dialog will allow them to indicate what was lacking or wrong in the resource selected. The intention with the happy/sad icons is to provide users with a simple, approachable user interface that does not require a lot of technical knowledge to use, but also allows them to provide further details if they can.

Figure 9 - Dialog when user indicates content does not meet expectations

Users will also be able to indicate that a resource did successfully meet their needs and preferences, as shown in Figure 10.

Figure 10 - Feedback when user indicates that content meets expectations

The paradata generated from this form of feedback will be provided to the resource producer using an extended version of the Paradata Schema developed by the Learning Registry (Rehak). A matching algorithm may include this machine-readable paradata in its analysis of whether or not a given resource meets the needs and preferences of the requesting user. This analysis will take into account the stated preferences of the users who provide feedback. A matching algorithm can, for example, compare feedback from users whose preferences are similar to those of the requesting user to gain additional information about the suitability of the resource. In this way, the paradata acts as a form of verification of the original metadata. For example, if the metadata states that a resource contains text alternatives to audio, but users who require this alternative report that they could not use the resource, the matching algorithm would adjust to stop providing the resource to users who have this requirement.
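The sketch below illustrates this kind of adjustment: confidence in a declared access feature is discounted when users who share the corresponding requirement report that the resource failed them. The scoring scheme and function name are invented for illustration, not taken from an actual matchmaker:

    // Illustrative only: estimate confidence in a declared access feature
    // from the feedback of users who share the corresponding requirement.
    function adjustedConfidence(declared, feedbackRecords, feature) {
        var relevant = feedbackRecords.filter(function (f) {
            return f.preferences.indexOf(feature) !== -1;
        });
        if (!declared || relevant.length === 0) {
            return declared ? 1 : 0; // no counter-evidence; trust the metadata
        }
        var satisfied = relevant.filter(function (f) {
            return f.satisfied;
        }).length;
        // Fraction of similar users the resource actually worked for.
        return satisfied / relevant.length;
    }

    // Metadata claims text alternatives, but users needing them disagree:
    adjustedConfidence(true, [
        { preferences: ["textAlternatives"], satisfied: false },
        { preferences: ["textAlternatives"], satisfied: false },
        { preferences: ["largePrint"], satisfied: true }
    ], "textAlternatives"); // => 0, so the resource would stop being offered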

Demonstration and Source Code

A working prototype of the Metadata Feedback components has been implemented using HTML, JavaScript, and CSS. As mentioned above, these components are implemented in a modular fashion so that they can ultimately be integrated into a variety of content delivery systems. The prototype makes extensive use of the GPII architecture and its related family of tools, including the GPII preferences framework and Fluid Infusion.

A demonstration of these components is available on the web:

http://metadata.floeproject.org/demos/feedback/index.html

The source code for the Metadata Feedback components is available under an open source license (dual licensed under both the New BSD License and the Educational Community License 2.0). The source code was developed in conformance with the GPII Technical Standards (Clark and Basman), including a robust suite of unit tests and comprehensive code review by the community. The source code is available on GitHub:

https://github.com/fluid-project/metadata

Conclusion

The Floe feedback tools will allow users to play an active role in improving the accessibility of resources by contributing metadata corrections, alternative resources, and direct feedback on the suitability of a resource for their individual needs. This feedback will be used to automatically derive usage metrics that will assist in the adjustment of metadata, improving resource matches for all users.

As these technologies are integrated into content creation and delivery tools, more resources will be available with appropriate metadata and paradata. This will result in more users having access to content in the form they require.

References

Bigham, Jeffrey P., Richard E. Ladner, and Yevgen Borodin. “The design of human-powered access technology” in Proceedings of the 13th International ACM SIGACCESS Conference on Computers and Accessibility (pp. 3-10). ACM, 2011.

Cheetham, Anastasia et al. “Accessible Metadata Generation” in Universal Access in Human-Computer Interaction. Design for All and Accessibility Practice, LNCS 8514 (pp. 101-110). Springer International Publishing, 2014.

Clark, Colin and Antranig Basman. GPII Technical Standards. Global Public Inclusive Infrastructure Wiki, 2014. http://wiki.gpii.net/w/GPII_Technical_Standards

Nevile, Liddy and Jutta Treviranus. “Interoperability for Individual Learner Centred Accessibility for Web-Based Educational Systems” in Educational Technology & Society 9 (4) (pp. 215-227). International Forum of Educational Technology & Society, 2006.

Rehak, Daniel R. Learning Registry Paradata Specification 1.0. Learning Registry Metadata Initiative, 2011. https://docs.google.com/document/d/1IrOYXd3S0FUwNozaEG5tM7Ki4_AZPrBn-pbyVUz-Bh0/edit?pli=1