If the EU Commission’s newly proposed harmonized rules on Artificial Intelligence (the “Artificial Intelligence Act”) (published April 21, 2021) are adopted, U.S.-based AI companies operating in European Union countries (or expecting to do so) may soon be subject to significant new regulatory requirements. The proposed regulations, with few exceptions, would apply to companies or individuals (“providers”) who place on the market or put into service certain high-risk AI systems in the EU, “users” (including companies) of those AI systems who are located in the EU, and providers and users of such AI systems that are located outside the EU but whose system outputs are used in the EU. If the timeline of the EU’s General Data Protection Regulation (GDPR) is any indication, it may take many months before the proposed AI regulations are adopted and become effective. Even so, U.S.-based AI companies that may be subject to the regulations would do well to use this time to map out a framework for achieving compliance.

The proposed regulations define an “AI system” as software developed with one or more of a set of enumerated techniques, including machine learning, logic- and knowledge-based approaches, statistical methods, Bayesian estimation, and search and optimization methods, that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments it interacts with. This is a broad definition that is expected to reach many data-driven businesses.
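To see how broadly this definition could sweep, consider a minimal, hypothetical Python sketch: even a small classifier built with an off-the-shelf library arguably checks every box in the definition (a listed technique, a human-defined objective, and outputs in the form of predictions).

```python
# Illustration only: even this toy model arguably meets the proposed
# definition of an "AI system" -- software developed with a machine
# learning technique that generates predictions for a human-defined
# objective (here, predicting customer churn; the data is made up).
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[1, 20], [2, 35], [3, 50], [4, 65]])  # tenure, monthly spend
y = np.array([0, 0, 1, 1])                          # churn labels

model = LogisticRegression().fit(X, y)
print(model.predict([[2, 40]]))  # a prediction that may influence decisions
```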

The proposed regulations generally do not apply to low- or medium-risk AI systems, nor to specific AI technologies and systems that are explicitly prohibited from operating in the EU after the effective date (such systems are listed in Article 5 of the regulation; more on this in subsequent posts). The rest, so-called “high-risk” AI systems, will need to comply with the rules (including existing systems if they are modified after the effective date). Example high-risk AI systems include certain medical devices, biometric identification systems, education or vocational training systems, law enforcement surveillance systems, and AI systems intended to be used as safety components of a product, among several others.

Regulated companies and individuals will need to notify designated EU regulatory bodies before their systems are put on the market or used; establish internal risk management and quality management systems; comply with certain data management requirements (related to ensuring data quality and representativeness); prepare extensive technical documentation for their AI systems (including documentation demonstrating compliance); maintain certain records during system use (including incident logs); and conduct conformity assessments before use (demonstrating compliance with applicable existing EU laws). Among other requirements, regulated companies and individuals will also have to design their AI systems to meet certain accuracy, robustness, transparency, and cybersecurity standards; make system outputs interpretable by users; and ensure human-in-the-loop oversight capabilities during use.
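What the record-keeping requirement means in practice will vary by system. As a purely hypothetical sketch (the proposal does not prescribe a log schema, and every field name below is an assumption), a provider might automatically capture an event record each time its system produces an output:

```python
# Hypothetical event record for record-keeping during system use; the
# schema and field names are illustrative assumptions, not drawn from
# the proposed regulation.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AISystemEvent:
    system_id: str                        # identifies the deployed AI system
    input_reference: str                  # pointer to the input data processed
    output_summary: str                   # prediction/recommendation produced
    human_reviewer: Optional[str] = None  # supports human-oversight duties
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Example: one automatically recorded event.
print(AISystemEvent(
    system_id="credit-scoring-v2",
    input_reference="application-8841",
    output_summary="declined; score 0.31",
    human_reviewer="analyst-07",
))
```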

Like the GDPR, the proposed AI regulations provide for significant penalties, including administrative fines. For example, deploying a prohibited AI system or failing to comply with certain data governance requirements could result in fines of up to 30 million EUR or 6% of a company’s total worldwide annual “turnover” (revenue), whichever is higher. Most other violations could result in fines of up to 20 million EUR or 4% of turnover, while supplying incorrect, incomplete, or misleading information to authorities could draw fines of up to 10 million EUR or 2% of annual turnover.
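Because each cap is the higher of a fixed amount and a share of turnover, exposure scales with company size. Here is a minimal sketch of that arithmetic, using purely illustrative figures:

```python
# Sketch of the proposal's fine formula: the applicable cap is the HIGHER
# of a fixed amount and a percentage of total worldwide annual turnover.
# All figures below are illustrative, not legal advice.
def fine_cap(fixed_cap_eur: float, turnover_pct: float,
             annual_turnover_eur: float) -> float:
    """Maximum administrative fine under the 'whichever is higher' formula."""
    return max(fixed_cap_eur, turnover_pct * annual_turnover_eur)

# A hypothetical company with EUR 2 billion in annual turnover deploys a
# prohibited AI system: 6% of turnover (EUR 120M) exceeds the EUR 30M
# fixed cap, so the turnover-based figure sets the ceiling.
print(fine_cap(30_000_000, 0.06, 2_000_000_000))  # 120000000.0
```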

Assuming the regulations as finally adopted resemble the proposal, U.S. companies likely to be regulated should begin assembling a cross-functional team drawn from their management, legal, and technology groups to plan a course to compliance, starting with the requirements needing the longest lead times. For some companies, this may include making structural changes to their risk, quality, and incident response management plans, and changing product development procedures to ensure AI systems meet interpretability and human oversight requirements. Conformity assessments may need to be incorporated into product development life cycles rather than conducted at the end, when a product is ready to launch. Companies already planning to make significant changes to existing AI systems (to their design or intended purpose) should consider accelerating those plans so the changes are complete before the regulations’ effective date. These pre-planning efforts will not only help companies smooth the transition to the EU’s future AI regulatory landscape, but may also position them to react more easily to possible new U.S.-specific AI regulations that some lawmakers have proposed.