The year 2020 may be remembered for many things, including a pernicious pandemic and contentious presidential election. But it also marked a turning point in efforts to regulate artificial intelligence (AI) technologies and the systems that embody them. Still, the AI legal landscape remains uncertain, and for stakeholders who develop and use AI systems and need predictability so they can properly manage legal liability risks, understanding what’s likely to come in 2021 is a valuable exercise. In this post, the timing of new regulations is explored, along with predictions on which AI technologies might be targeted for regulation.
Earlier this year, I predicted when we would see new federal regulations specifically targeting artificial intelligence technologies. That was before the pandemic took hold, which continues to devastate the lives of so many to this day and is wreaking havoc on the world’s economy. Even in normal times, making predictions about a legal landscape for highly fluid and disruptive technologies is difficult, but doubly so when it comes to the AI industry.
It’s informative to revisit the factors I considered in my earlier prediction, with the benefit of hindsight and additional ground-truth facts, to see whether 2021 will be the year for major new AI regulations.
First, in January 2020, the White House took a big step toward possible new regulations when it issued its “Guidance for Regulation of Artificial Intelligence Applications” memo to the heads of federal government agencies, by which it sought agency and public input on a general regulatory framework and set of principles applicable to agency use of AI technologies. That seemed like a positive development at the time, given that it came more than three years after the Obama Administration’s October 2016 plan for Preparing for the Future of Artificial Intelligence. It is unlikely that any action will come of the effort until after the new year, when things settle down after the election.
Second, in February 2020, the European Union (EU) Commission issued its own AI regulatory framework document for EU member countries. Like its earlier General Data Protection Regulation (GDPR), the EU plan for AI appears to reach U.S. companies doing business in the EU (click here to learn more about the EU’s plan and how it could affect U.S. companies). We’ve already seen legislation in the U.S. that includes measures mimicking aspects of the GDPR. It’s not out of the question that U.S. regulators might mirror some of the EU’s AI plan in this country.
Third, in the U.S., the National Institute of Standards and Technology (NIST) was charged with working with stakeholders within and outside the government to develop technical and non-technical standards for AI, which could lead to new agency-specific rules that could apply to private AI businesses.
Fourth, state and local lawmakers, responding to constituent concerns, began issuing bans on facial recognition, one of AI’s most controversial technologies.
And finally, the number of legislative bills proposed by members of Congress mentioning AI (and machine learning, specifically) had been increasing, as were the number of Congressional committee hearings on the subject, which together demonstrated Congress’s intent to act.
Then Covid-19 hit and changed everything. Even so, I estimate that 2021 will be the year for federal legislation affecting at least some aspect of AI. My confidence would be higher with a Democratic-controlled White House and Congress, given the Trump administration’s expressed views about minimizing possible roadblocks (i.e., regulations) that might stymie American innovation in the area of AI and prevent the U.S. from maintaining (or achieving, depending on your viewpoint) AI dominance. Then again, the Biden campaign hasn’t issued its thoughts on what it might do with AI (a point this New York Times article by David McCabe makes clear), so there’s no telling what may happen even under a Biden administration.
That said, it seems that change is needed on the regulatory front. AI technologies continue to disrupt sectors of the U.S. economy at unprecedented levels, including highly sensitive ones like banking, healthcare, transportation, manufacturing, and legal services. While we’ve seen many positive impacts AI has had on society, the speed at which AI has been adopted has also led to significant problems. Concerns over the surveillance-driven collection and use of personal data and biometrics (think facial recognition), the presence of bias in automated decision-making systems (disproportionately impacting minorities, as I discussed here), the inability of even the best-staffed AI companies to clearly explain how their AI systems make decisions or take actions, the consolidation of big data by a few tech companies, and the rise of nefarious uses of AI such as fake videos, cyber-intrusion, misinformation bots, and certain lethal applications, have led to calls for regulation.
My prediction does not mean that no federal laws and regulations currently exist that could apply to AI.
Private commercial businesses and individuals who make, sell, and/or use AI-based products and services may be subject to, or protected by, one or more of the following laws or regulations that broadly affect hardware/software systems and thus may directly or indirectly apply to an AI system and related activities: consumer protection laws; data and biometric privacy laws; civil rights laws; intellectual property laws, including patent and trademark laws; rights of privacy and publicity laws; labor and employment laws; export control regulations; securities regulations; autonomous systems rules; Federal Acquisition Regulations (FARs); Federal Aviation Regulations (FARs); and safety-related technical requirements (automotive safety, for example).
So, while we wait to see what happens next in Congress and in the AI industry, I’ll be exploring in future posts how lawyers in different practice areas can help AI technology clients navigate the current legal landscape as well as plan for future ones.