As national protests over systemic and structural racism in America continue, community organizers, Black scholars, and others fighting injustice and unequal treatment are once again raising awareness of a long-standing problem lurking within artificial intelligence data-based technologies: bias.

The campaign to root out bias, or eliminate biased systems altogether, has been amplified in recent weeks in the wake of reports of a Black Michigan man who was apparently arrested solely on the basis of a misidentification by a facial recognition system used by law enforcement. Criminal charges against the man were dropped only after police discovered their error, but by then the harm had already been done.

Creating fair and equitable data-based decision systems, devoid of the conscious and unconscious bias that causes disparate impacts, especially on racial minorities, is a critical task and part of a much larger national anti-discrimination priority. The imperative for federal and state lawmakers, government agencies, and the companies that develop, deploy, and operate artificial intelligence technologies is to pass meaningful laws and adopt governance strategies that ensure no biased data-based system has a role in American society.

Data May Be the New Gold, but It Tarnishes Easily

As the name suggests, data-based systems are powered by data, which some in the tech community have analogized to gold, given how much big datasets reveal about the world, including human behavior and interactions, that can be monetized. When fed into machine learning algorithms, data create knowledge systems touted for their ability to augment or replace complex human mental endeavors, such as making important decisions. Today, data-based systems are found in nearly every sector of the economy and segment of society.

But those same systems can also perpetuate inequality and injustice and cause harm, especially to persons of color, ethnic minorities, and other marginalized people, when the data reflect bias. Bias appears, for example, when data are generated while people with certain characteristics or features are treated more harshly, or simply differently, than similarly situated people who do not share those characteristics. Historically, this sort of unequal treatment has been observed, and recorded in data, in areas such as banking, business, education, employment, entertainment, housing, law, and voting, among others. A developer who is unaware of, or indifferent to, the provenance of data can unconsciously create a biased system simply by choosing a bad dataset.

In the case of facial recognition, the technology’s multi-step computational process involves deep neural networks trained to “learn” facial features by evaluating large image datasets containing face information. Problems arise when the distribution of faces in a dataset is skewed toward one particular demographic, or when face images are improperly labeled using terms and phrases that carry racial, ethnic, gender, or other stereotypes, or misogynistic and demeaning captions. If white male faces make up a majority of the faces in a dataset, a facial recognition algorithm trained on that dataset will unsurprisingly perform better on new white male face images than on, for example, images of persons with darker skin tones.
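
The dynamic is easy to demonstrate in miniature. The following sketch uses synthetic, purely illustrative data (it is not any vendor’s actual pipeline, and the group labels and feature values are placeholders): a classifier is trained on a dataset in which one demographic group supplies 90% of the examples and the other group’s examples occupy a different region of feature space, and the resulting per-group accuracy diverges sharply.

```python
# Illustrative sketch only: skewed training data can yield disparate per-group accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Toy two-class data for one demographic group; "shift" moves the group
    # to a different region of feature space.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 16))
    y = rng.integers(0, 2, size=n)
    X[y == 1] += 0.8          # class signal shared by both groups
    return X, y

# Skewed dataset: 90% group A, 10% group B.
Xa, ya = make_group(9000, shift=0.0)
Xb, yb = make_group(1000, shift=2.0)
X = np.vstack([Xa, Xb])
y = np.concatenate([ya, yb])
g = np.array(["A"] * len(ya) + ["B"] * len(yb))

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(X, y, g, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# The model fits the majority group well; accuracy on the minority group lags badly.
for group in ("A", "B"):
    mask = g_te == group
    print(f"group {group}: accuracy = {model.score(X_te[mask], y_te[mask]):.3f}")
```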

Finding bias in data, and in the systems built on that data, is one of the biggest challenges facing those fighting the problem. The task is expensive, arduous, and technically challenging because bias becomes deeply baked into models. Often, bias is not apparent without testing and transparency, which some developers are reluctant to provide. Practically speaking, this means reformers may have to observe system outputs over time to see whether they exhibit bias, for example as sketched below.
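
One hedged illustration of what such an output-level audit might look like, assuming access only to a log of the system’s decisions and later-observed outcomes (the column names and figures here are hypothetical): compare how often each demographic group is flagged, and how often each group is flagged incorrectly.

```python
# Minimal output-audit sketch: no access to model internals, only logged decisions.
import pandas as pd

log = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "decision": [1, 0, 1, 1, 1, 0, 1, 0],   # system output (1 = adverse/flagged)
    "outcome":  [1, 0, 0, 0, 1, 0, 0, 1],   # later-observed ground truth
})

for group, rows in log.groupby("group"):
    flagged_rate = rows["decision"].mean()
    negatives = rows[rows["outcome"] == 0]
    false_positive_rate = negatives["decision"].mean() if len(negatives) else float("nan")
    print(f"group {group}: flagged {flagged_rate:.0%}, false-positive rate {false_positive_rate:.0%}")
```

Large, persistent gaps between groups in numbers like these are exactly the kind of signal reformers must currently rely on when developers decline to open their systems to direct testing.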

An even greater challenge going forward is the sheer amount of data being created. By one estimate, more new information will be generated over the next two years than has been created from the very first byte of data to the present day. With all that new data, developers will find enticing opportunities to create new revenue-generating data-based systems. At the same time, market pressures may cause some to sidestep anti-bias efforts, at least absent enforceable internal company anti-discrimination guidelines and accountability measures.

Facial Recognition is a Cautionary Tale

As notorious as facial recognition has become, its notoriety has an upside: increased awareness of the wider structural bias problem potentially affecting many data-based systems currently in use.  As U.S. lawmakers recently acknowledged, “research shows that algorithmic bias and discrimination exist on online housing platforms and in lending that uses artificial intelligence for advertising and decision-making purposes.”  H. Res. 946 (May 1, 2020).  Bias can also arise in decision systems that governments use to allocate financial assistance and other limited services to the public they serve.  Bias has been documented in systems used by courts to decide eligibility for parole following arrest, and in systems law enforcement uses to decide where to deploy police in a city.  In other fields, bias can arise in decision systems used in social media advertising, employee hiring, banking platforms, and biometric applications.

Like other inequality and injustice issues in America, the existence of bias in data-based systems, and its disproportional adverse impacts especially on Blacks, is not a new issue.  In 2016, the Obama administration acknowledged the problem:

“[Artificial intelligence] also has the potential to improve aspects of the criminal justice system, including crime reporting, policing, bail, sentencing, and parole decisions. The Administration is exploring how AI can responsibly benefit current initiatives such as Data Driven Justice and the Police Data Initiative that seek to provide law enforcement and the public with data that can better inform decision-making in the criminal justice system, while also taking care to minimize the possibility that AI might introduce bias or inaccuracies due to deficiencies in the available data.”  — White House: Preparing for the Future of Artificial Intelligence (Oct. 2016)

More recently, some state and local governments have responded to the backlash over facial recognition by banning or limiting its use by law enforcement and others, though some point to privacy and surveillance overreach as the primary reasons for those actions. Even so, banning or limiting one technology does not address the wider systemic bias problem known to affect other data-based systems, most of which see far less media attention than facial recognition.

The Calls for Artificial Intelligence Regulations Get Louder

Spurred by protests, negative media attention, and employee activism, as well as the current uncertainty surrounding legal liability and damages facing those operating in the artificial intelligence sector, some of the top technology companies have petitioned Congress and the White House to regulate artificial intelligence.

Federal lawmakers’ responses, however, while seemingly well-intentioned, fall short of what is actually needed. Bills like H.R.6216 (the “National Artificial Intelligence Initiative Act of 2020”; Mar. 12, 2020) would provide funding for research and education related to “methods to assess, characterize, and reduce bias in datasets and artificial intelligence systems,” but workable “methods” have already been thoroughly vetted by the likes of AI Now (2019), Future of Life (2017), and artificial intelligence thought leaders at some of the tech companies calling for regulations. More research and education would certainly improve understanding of the scope of the inherent bias problem, but they do little to block existing discriminatory systems and applications from being deployed and used today.

Other legislative proposals seem to take a more targeted approach, like H.R.2202 (the “Growing Artificial Intelligence Through Research Act”; Apr. 10, 2019), which purports to “strengthen research, development, demonstration, and application in the fields of artificial intelligence science and technology by…identifying and minimizing inappropriate bias in data sets, algorithms, and other aspects of artificial intelligence.” Even so, the bill focuses more on research than on providing a definitive framework for solving the bias problem or holding violators accountable.

Still other measures, like S.3284 (the “Ethical Use of Facial Recognition Act”; Feb. 12, 2020) and, most recently, H.R.7120 (the “George Floyd Justice in Policing Act”; Jun. 25, 2020), go beyond education and research in favor of limited bans on technology, at least until meaningful solutions can be found. But even proposals to ban problematic technologies are likely to end up bogged down in Congressional committees and go nowhere, a fate most earlier proposals have met. Congress needs to avoid partisanship and the distractions of the pandemic and the upcoming election and take meaningful action now to stop bias in data-based systems.

As an initial step, Congress should listen to technologists and pass laws containing milestones leading to the development of technical and non-technical standards addressing fairness, accountability, transparency, and the prevention of bias and discrimination in artificial intelligence system outputs, consistent with existing civil rights and anti-discrimination laws. Standards and criteria will be needed by both regulators and those they regulate to assess compliance with anti-bias measures, whether those assessments are risk-based, based on objective testing, or use some other investigatory approach.

Lawmakers should also pass legislation consistent with the goals of the Federal Data Strategy’s Data Ethics Framework (2020 Action Plan), expanded to encompass private-sector datasets. This could include appropriating funds for the formation of data repositories of high-quality, non-proprietary datasets, updated regularly to reflect a diverse world and made available to the public. Absent a national plan for data, a handful of large technology companies will likely continue to create and control large datasets and, in turn, dominate the artificial intelligence industry (an altogether different problem needing Congress’s attention).

Lawmakers should also look to state legislatures for examples of other types of guardrails. Washington State, for example, the first state to curb use of facial recognition technology by law enforcement, requires its public agencies to independently test for “accuracy and unfair performance differences” when a decision system relies on skin tone, gender, age, and certain other characteristic features; a sketch of what such a test might look like follows. Similar tests could be developed for data-based decision systems that use other features, including demographic, behavioral, and geographic characteristics that historically have been used to discriminate.
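
The Washington statute does not prescribe a formula, so the following is only a hedged illustration of the kind of independent test it contemplates: compute accuracy separately for each group and flag the system if the gap between the best- and worst-served groups exceeds a chosen tolerance. The tolerance value, group labels, and sample records are all illustrative assumptions.

```python
# Illustrative per-group "unfair performance difference" check (not the statutory test).
from collections import defaultdict

def performance_gap(records, tolerance=0.05):
    """records: iterable of (group, predicted_label, true_label)."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        correct[group] += int(predicted == actual)
    accuracy = {g: correct[g] / total[g] for g in total}
    gap = max(accuracy.values()) - min(accuracy.values())
    return accuracy, gap, gap <= tolerance

# Hypothetical verification records from an independent test set.
records = [
    ("darker_skin", "match", "match"), ("darker_skin", "match", "no_match"),
    ("lighter_skin", "match", "match"), ("lighter_skin", "no_match", "no_match"),
]
accuracy, gap, passes = performance_gap(records)
print(accuracy, f"gap={gap:.2f}", "PASS" if passes else "FAIL")
```

Where a regulator (or an agency’s independent tester) sets the tolerance, and which characteristics define the groups, are policy questions the legislation itself would have to answer.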

As a backstop, lawmakers could also provide a private right of action in the event unregulated biased systems enter commerce and cause harm.  Without a means for aggrieved persons to sue for damages, any law aimed at reducing structural bias in data-based systems could be weakened from the start. The alternative to a private right of action is codified statutory damages or civil penalties, but to be effective as deterrents, the amounts of damages or civil fines must be sufficiently large to tilt a company’s cost-benefit analysis toward compliance.

Finally, lawmakers can legislate protections for whistleblowers who identify developers or agencies that ignore anti-bias/anti-discrimination laws and intentionally rely on biased outputs or permit biased systems to enter the stream of commerce.

For its part, the White House should heed the words of its National Security Commission on Artificial Intelligence (NSCAI), which acknowledged, at least in the context of national security, the presence of bias in data-based systems (Q1 Recommendations; March 2020). It should also heed the voices of the majority of those who responded to the Office of Management and Budget’s (OMB) draft “Guidance for Regulation of Artificial Intelligence Applications” Memorandum for the Heads of Executive Departments and Agencies (January 13, 2020) and expressed at least some level of support for fairness and non-discrimination as guiding principles federal agencies should consider when acquiring or developing artificial intelligence systems.

Notably, OMB’s Memorandum does not appear to eliminate or even “minimize the possibility” of biased outputs caused by inherently biased data. Rather, OMB suggests agencies consider, in accordance with law, “issues of fairness and non-discrimination with respect to outcomes and decisions produced by the AI application at issue, as well as whether the AI application at issue may reduce levels of unlawful, unfair, or otherwise unintended discrimination as compared to existing processes.” Under that framework, an agency could, absent other guidance, consider fairness and non-discrimination in data-based system outputs while still permitting use of a biased data-based decision system, so long as its output is in some measure less discriminatory than existing or alternative biased decision systems (including subjective human decision-making). While it may be technically challenging to eliminate or even minimize the possibility of bias in data-based system outputs, permitting slightly less biased outcomes because they are marginally better than existing or alternative ones does little to break down systemic bias and replace it with fairness and equality.

All of which makes the National Institute of Standards and Technology’s (NIST) efforts to research and develop technical standards for artificial intelligence all the more important. As required by Executive Order 13859 (“Maintaining American Leadership in Artificial Intelligence”; February 11, 2019), NIST is leading an effort, with public input, to assess how technical standards could be used in evaluating artificial intelligence technologies, including bias in data-based systems. With the pandemic, however, NIST’s first public workshop focused on the bias issue is not scheduled until August 2020. Thus, concrete recommendations from the working group may not be forthcoming until after the election in November.

A Path Forward

Until NIST publishes technical and non-technical standards, the White House issues new regulations, and Congress and the states pass appropriate and targeted legislation, the burden of eliminating (or at least minimizing the potential for) bias in data-based systems will fall mostly on the shoulders of the companies that make and deploy the systems. Legal uncertainty around artificial intelligence should be enough to drive those companies toward action, which may include employing a diverse group of domain experts in ethics and law, embedded with system designers, to help develop socially responsible and ethical governance approaches that advance anti-bias/anti-discrimination principles. Some technology companies have already taken meaningful steps in that direction, and many others need to follow their lead.

Lawyers especially, both in-house and in private practice, will be crucial in helping craft appropriate measures for those companies. Compliance with laws and regulations will require attention to the data, feature selection, use cases, interfaces, and other aspects of data-based systems to ensure fair and equal treatment of those impacted by the systems. Lawyers will also be instrumental in helping vet third-party data-based software-as-a-service systems to ensure those systems meet company anti-bias/anti-discrimination standards before their outputs are used to drive company decisions (enforced through specific terms of use and license agreements). Lawyers can also help establish company incident response plans and risk assessment procedures specific to artificial intelligence data-based decision systems.

Many have been moved by the tumultuous events involving police brutality disproportionately targeting Blacks, and many are demanding action on that and the broader problem of structural racism and discrimination in America and elsewhere. Doing nothing is unacceptable, especially now, when Americans by a large margin believe racism is a significant problem and that things in this country need to change. While it may seem like bias in data-based systems is not a significant part of the larger structural discrimination problem, no part of the problem can be ignored. If demonstrable changes are not made even in small areas, trust will erode more broadly, the consequence of which is continued structural bias and, in the case of artificial intelligence, a failure to fully achieve the benefits advocates for the technology have long promised.
