The ODD (Optimizing Diversity with Disability) initiative is a part of the IDRC's WeCount project. A central aim of ODD is to investigate bias in widely deployed hiring algorithms and to suggest more inclusive alternatives.

Automated Algorithms Supporting the Infrastructure of Society

These automated hiring and matching algorithms, implemented by major corporations such as LinkedIn, Amazon, and others, can be positioned in the wider context of automated processes that use machine learning/AI algorithms and support the infrastructure of society. These systems inevitably result in inequitable outcomes, largely as unintended consequences both of the historical biases embedded in the data on which they are trained, and of the narrow objectives their optimization processes are set to pursue.

It is also arguable that there is a further source of bias: the system is deployed in the context of capitalistic utility functions that tend to favour accruing benefits privately while socialising losses. This results in bias because the community facing the losses (the public at large) is inevitably more diverse and less legible than the community accruing the benefits (the corporation deploying the system). There are numerous other sources of bias, both conscious and unconscious, which are widely described elsewhere.

Risks of Automated Hiring Systems

The above considerations are common to the deployment of any kind of machine learning system as part of the infrastructure of society. Turning to hiring systems more specifically, it is becoming widely recognised, as recorded in a recent Harvard Business School report, that the architecture of these hiring systems is leading to inequitable outcomes, some of which can be accounted for by the generic considerations above. Workers who do not fit the algorithm's self-reinforcing profile of those deemed appropriate for work are being excluded from participation. Inevitably, these workers form a more diverse, heterogeneous group than those who fit the profile. Let us imagine that those impacted will want to mount an investigation of one of these systems, so that they can better understand who is being marginalised and how the system can be improved to be more inclusive.
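
As a purely illustrative sketch of this self-reinforcing dynamic (the features, numbers, and scoring rule below are hypothetical, not drawn from any deployed system), a matcher that scores candidates by their similarity to previously successful hires will systematically rank down anyone whose history deviates from that template, for instance a candidate with an employment gap:

```python
import numpy as np

# Hypothetical feature columns: [years of continuous employment, keyword overlap, degree match]
# None of these are taken from a real system; they are purely illustrative.
past_hires = np.array([
    [10.0, 0.90, 1.0],
    [ 8.0, 0.80, 1.0],
    [12.0, 0.95, 1.0],
])

# The "ideal profile" is simply the average of past successful hires:
# a self-reinforcing template built from whoever happened to be hired before.
ideal_profile = past_hires.mean(axis=0)

def match_score(candidate: np.ndarray) -> float:
    """Score a candidate by closeness to the historical template (higher is better)."""
    return -float(np.linalg.norm(candidate - ideal_profile))

typical  = np.array([9.0, 0.85, 1.0])  # closely resembles past hires
atypical = np.array([3.0, 0.85, 1.0])  # comparable skills, but an employment gap

print(match_score(typical), match_score(atypical))
# The atypical candidate ranks far lower purely because of the gap, and is
# filtered out before any human reviewer sees the application.
```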

These are some common responses arguing against such an investigation, together with an evaluation of their risks and costs:

All of these considerations tend towards requiring massively greater transparency in such systems, in accordance with WeCount's aims of creating inclusive data ecosystems. Systems should be much more accountable not only for the data which they require from citizens, but also for the use they make of that data. It should be possible to trace the use of one's disclosed data and the purposes to which it has been put, and, if necessary, to revoke its use by the corporation. In the particular context of hiring systems, it should be possible to discover how personal data has been schematized by the system, and to trace how differently schematized individuals are treated by it. Without this, it is extremely likely that inequitable and harmful outcomes will result, often in ways that are opaque to all of those involved.
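
As a minimal sketch of what such traceability could look like (the record structure and field names below are hypothetical, not the API of any existing system), each disclosure could carry a machine-readable provenance record naming the schema fields the data was mapped to, the purposes it serves, and a means of revoking its use:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DisclosureRecord:
    """Hypothetical provenance record for one piece of disclosed personal data."""
    subject_id: str            # the person the data is about
    raw_field: str             # what was actually disclosed, e.g. "work history"
    schema_fields: list[str]   # how the system schematized it, e.g. ["years_employed", "gap_months"]
    purposes: list[str]        # uses consented to, e.g. ["job matching"]
    revoked: bool = False
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def revoke(self) -> None:
        """Withdraw consent; downstream pipelines must check this flag before any use."""
        self.revoked = True

record = DisclosureRecord(
    subject_id="candidate-042",
    raw_field="work history",
    schema_fields=["years_employed", "gap_months"],
    purposes=["job matching"],
)
record.revoke()
assert record.revoked  # the deploying corporation's pipelines should now refuse to use this data
```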

The Lifecycle Leading to Society's Acceptance of Technology

We accept the loose standards applied to the deployment of machine learning (and, more widely, software engineering) systems because they are part of an immature industry whose nature and impacts on society are still poorly understood. The history of many previous immature industries makes clear that society inevitably gains the expertise, the will, and the capability to effectively oversee and regulate them. The open questions are: what trajectory does society follow in establishing this oversight, how quickly does it proceed, and who acts to accelerate or impede the process?

An example that is very familiar from the history of technology is the evolution of safety regulations governing the operation of steamboats in the United States during the 19th century. In 1817, the boilers of these steamboats were a cutting-edge technology, and there were strong incentives to compromise on safety standards by fabricating them from dangerously thin plate, running them over pressure and above safe speeds. The steamboat owners insisted that it would be impossible to comply with such onerous safety standards and still run their businesses at a profit. They lobbied effectively to prevent regulation, and when regulations were first enacted in 1838, supervised their operation internally rather than submitting to external audits by disinterested inspectors. This argumentation continued for decades, resulting in thousands of deaths, until effective legislation regulating the industry was finally introduced in 1852; from then onwards it was found perfectly possible to run the industry both profitably and safely.

For those involved in these processes, the question to be asked is: what part will they play? Should they be those acting to bring in effective oversight of such innovative technologies by competent, independent authorities, in order to ensure fair outcomes for society as quickly as possible, or should they be those acting to ensure that profits can be extracted from unsafe, inequitable practices for as long as possible, perhaps stalling oversight with the promise of self-regulation?

Activities for the ODD Project

As we begin conversations with ODD project stakeholders, we intend to gather answers to all of these questions, and to gain insights into the further qualities needed by those who deploy automated hiring processes if they are to avoid damaging consequences for their clients. We will assemble datasets of both real and synthetic profiles of postings and individuals, in order to suggest improvements to these hiring algorithms in line with "fair machine learning" metrics, as well as to suggest wholesale alternatives to these algorithms based on recent research on exploratory, rather than optimizing, approaches.
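
As an indication of the kind of "fair machine learning" metrics we have in mind (the shortlisting decisions and group labels below are synthetic and purely illustrative), measures such as demographic parity and equal opportunity can be computed directly from a system's outputs:

```python
import numpy as np

# Synthetic shortlisting decisions from a hypothetical hiring system: 1 = shortlisted.
decisions = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 0])
# Whether each candidate was in fact qualified (e.g. judged suitable by a human panel).
qualified = np.array([1, 1, 1, 1, 0, 1, 1, 0, 1, 0])
# Group membership, e.g. disabled (1) vs. non-disabled (0): purely illustrative labels.
group     = np.array([0, 1, 0, 0, 1, 0, 1, 1, 0, 1])

def selection_rate(dec: np.ndarray, mask: np.ndarray) -> float:
    """Fraction of candidates in the masked group who were shortlisted."""
    return float(dec[mask].mean())

# Demographic parity difference: gap in shortlisting rates between the two groups.
dp_diff = selection_rate(decisions, group == 0) - selection_rate(decisions, group == 1)

# Equal opportunity difference: the same gap, restricted to qualified candidates only.
def true_positive_rate(dec: np.ndarray, qual: np.ndarray, mask: np.ndarray) -> float:
    m = mask & (qual == 1)
    return float(dec[m].mean())

eo_diff = (true_positive_rate(decisions, qualified, group == 0)
           - true_positive_rate(decisions, qualified, group == 1))

print(f"demographic parity difference: {dp_diff:.2f}")   # 0.80 on this toy data
print(f"equal opportunity difference:  {eo_diff:.2f}")   # 0.50 on this toy data
```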