On February 19, 2020, the European Commission issued a plan for regulating high-risk artificial intelligence (AI) technologies developed or deployed in the EU. Titled the “White Paper on Artificial Intelligence: a European Approach to Excellence and Trust,” the plan was published along with a companion “European Strategy for Data” and follows an earlier “AI Strategy” (2018) and AI-specific ethical guidelines (April 2019).

In addition to presenting a framework for regulating “AI applications” in the EU, the Commission’s plan focuses on creating and organizing an AI ecosystem, encouraging cooperation among member states and institutions, making infrastructure changes, and providing for investment in AI.  It also includes a special focus on data generation efforts related to the development and use of “European AI” in EU member countries.

Although the regulatory framework is not yet enforceable (the EU is accepting public comments on the plan through May 19, 2020), its proposals are a significant step toward targeted, AI-specific regulations that will predictably impact businesses both within and outside the EU, just as the EU’s General Data Protection Regulation (GDPR) impacted US-based companies following its implementation in 2018.

“Today we are presenting our ambition to shape Europe’s digital future. It covers everything from cybersecurity to critical infrastructures, digital education to skills, democracy to media. I want that digital Europe reflects the best of Europe – open, fair, diverse, democratic, and confident.”

Ursula von der Leyen, President of the European Commission (Feb. 19, 2020)

Below is a summary of aspects of the Commission’s plan that may impact US companies the most.

Risk-Focused Regulations

The EU’s plan for regulating AI technologies focuses on “high-risk” AI applications having the greatest impact on the EU and its values, taking into account both the sector of the economy and the intended uses of the application.

Under the plan, high-risk AI applications are those identified in sectors of high impact, such as healthcare, transportation, energy, and portions of the public sector (services); those that produce legal or other similarly “significant effects” on individual and corporate rights; those that pose risks of injury, death, or significant material or immaterial damage; and those that produce effects that cannot reasonably be avoided by individuals or legal entities.  Examples of high-risk applications include those that impact worker and consumer rights.  Remote biometric monitoring (e.g., facial recognition) and other “intrusive surveillance technologies,” the Commission says, would always be classified as high-risk use cases.
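The White Paper stops short of providing a formal test, but the criteria described above amount to a cumulative, two-part inquiry (a high-impact sector plus a use with significant effects), with certain use cases treated as high-risk regardless of sector. The Python sketch below is purely illustrative; the sector labels, use-case labels, and function are hypothetical stand-ins for whatever definitions final regulations might adopt.

```python
# Illustrative sketch of the cumulative "high-risk" test described in the
# White Paper. All labels here are hypothetical.
HIGH_IMPACT_SECTORS = {"healthcare", "transportation", "energy", "public services"}
ALWAYS_HIGH_RISK_USES = {"remote biometric identification", "intrusive surveillance"}

def is_high_risk(sector: str, use: str, significant_effects: bool) -> bool:
    """An AI application is high-risk if its use case is always high-risk,
    or if it operates in a high-impact sector AND produces significant
    effects on individuals or legal entities."""
    if use in ALWAYS_HIGH_RISK_USES:
        return True
    return sector in HIGH_IMPACT_SECTORS and significant_effects

# Examples of how the two prongs combine:
print(is_high_risk("healthcare", "diagnostic triage", True))              # True
print(is_high_risk("retail", "product recommendations", False))          # False
print(is_high_risk("retail", "remote biometric identification", False))  # True
```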

Those in favor of risk-based approaches to regulating AI say such an approach can provide the transparency, trustworthiness, and explainability that regulators and the public seek. A risk-based approach requires companies to closely scrutinize every phase of AI system development, deployment, and post-deployment. For instance, evaluating cumulative risk could require a review of individual risks associated with selecting datasets, cleaning/processing data, selecting a model architecture, training a model, conducting accuracy evaluations, deploying the model, forecasting impacts and possible unexpected uses, and monitoring performance and impacts. Comprehensive risk analyses often require a significant initial and ongoing investment of time and money.  A challenge in any sort of risk analysis is setting acceptable standards, i.e., the number of bad outcomes that would be acceptable relative to a population.
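To make the lifecycle review above concrete, here is a minimal sketch of a per-phase risk register in Python. It assumes, purely for illustration, that each phase’s risk can be scored as likelihood times severity and that phase risks are independent; the phase names mirror the review steps listed above, and the 5% threshold is a hypothetical “acceptable standard” of the kind the last sentence describes.

```python
from dataclasses import dataclass

# Lifecycle phases mirroring the review steps described above.
PHASES = [
    "dataset selection", "data cleaning/processing", "architecture selection",
    "model training", "accuracy evaluation", "deployment",
    "impact forecasting", "post-deployment monitoring",
]

@dataclass
class PhaseRisk:
    phase: str
    likelihood: float  # estimated probability of a bad outcome (0.0-1.0)
    severity: float    # estimated impact if it occurs (0.0-1.0)

    @property
    def score(self) -> float:
        return self.likelihood * self.severity

def cumulative_risk(risks: list) -> float:
    """Probability that at least one phase produces a bad outcome,
    treating phase risks as independent (a simplifying assumption)."""
    p_all_ok = 1.0
    for r in risks:
        p_all_ok *= 1.0 - r.score
    return 1.0 - p_all_ok

# Hypothetical acceptable standard: no more than a 5% chance of a bad outcome.
ACCEPTABLE_THRESHOLD = 0.05

register = [PhaseRisk(p, likelihood=0.02, severity=0.5) for p in PHASES]
if cumulative_risk(register) > ACCEPTABLE_THRESHOLD:
    print("Cumulative risk exceeds the acceptable standard; review required.")
```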

Although the EU’s plan does not establish specific regulations for high-risk applications, it does suggest standards and criteria that could be established for such things as training data; record- and data-keeping (what to retain and for how long); information to be provided (to facilitate transparency); robustness and accuracy (to promote trustworthiness); human oversight (a human-in-the-loop to avoid adverse effects from unchecked autonomy); and specific requirements for particular AI applications, such as those used for remote biometric identification.  The plan hints at what the scope of those standards and criteria might look like.

In the case of training data, for example, future regulations could require datasets to be sufficiently broad, to cover all relevant intended-use scenarios for a trained model, to not result in applications that discriminate, and to protect privacy.  In the case of record-keeping, future regulations could require retaining, for a period of time, the datasets used in model development, along with information about the programming methodologies chosen (information that may need to be provided to regulators).  In the case of providing information, future regulations could require notice to consumers when an AI system is being used.  In the case of robustness, future regulations could impose a reproducibility requirement. In the case of human oversight, future regulations could require varying levels of intervention, from human review of an AI system’s output before it impacts others to real-time monitoring so that a human can intervene when necessary.
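As a thought experiment, record-keeping and reproducibility requirements of this sort could be supported with fairly ordinary engineering practices: hashing and retaining training datasets, fixing random seeds, and logging the methodology choices a regulator might later ask about. The Python sketch below illustrates the idea; the file names, fields, and retention format are hypothetical, not anything the White Paper prescribes.

```python
import hashlib
import json
import random
import time

def sha256_of_file(path: str) -> str:
    """Content hash so a retained dataset can later be matched to a model."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def build_training_record(dataset_path: str, seed: int, hyperparams: dict) -> dict:
    """Assemble the kind of metadata a record-keeping rule might require."""
    random.seed(seed)  # fixing seeds supports a reproducibility requirement
    return {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "dataset_sha256": sha256_of_file(dataset_path),
        "random_seed": seed,
        "hyperparameters": hyperparams,
        "methodology_notes": "placeholder for the chosen programming methodology",
    }

if __name__ == "__main__":
    # Placeholder dataset so the sketch runs end-to-end.
    with open("train.csv", "w") as f:
        f.write("feature,label\n1,0\n2,1\n")
    record = build_training_record("train.csv", seed=42,
                                   hyperparams={"max_depth": 6, "n_estimators": 200})
    # Retain the record (and the dataset itself) for the regulator-defined period.
    with open("training_record.json", "w") as f:
        json.dump(record, f, indent=2)
```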

Many US companies already perform risk analysis as part of normal business operations, for example as part of their financial reporting obligations under US Securities and Exchange Commission (SEC) regulations. Companies that are not subject to SEC rules likely follow generally recommended good business practices that include some assessment of risk.  Even so, US companies operating in EU countries should consider whether their AI applications might be classified as high-risk under the EU’s plan, or as having an impact in one of the targeted sectors, and plan accordingly.

Apportioning Responsibility for Compliance, Liability, and Enforcement

The Commission’s plan suggests that the burden of regulation should fall on the actors best placed to address potential risks, meaning some regulations could apply to AI system developers while others could burden so-called “deployers” of the technology.  Thus, a US-based company that develops an AI application and deploys it in the EU could face regulations applicable to the development, deployment, and use of its AI technology.

To ensure compliance, the Commission may rely on conformity assessments performed by regulators prior to an AI application being used in the EU, as part of a process to verify and ensure that the plan’s mandatory requirements applicable to high-risk AI applications are being met.

Compliance may also be assessed in other ways, including by verifying the data used to train an AI model, assessing the relevant programming and training methodologies, and evaluating the processes and techniques used to build, test, and validate AI systems, all of which could require US companies to disclose confidential information (the Commission’s plan addresses procedures to safeguard trade secrets).  Of particular note to US companies, when a conformity assessment shows that an AI system does not meet applicable requirements, the identified shortcomings will need to be remedied, which could require a US company to re-train its model “in the EU in such a way as to ensure that all applicable requirements are met.”

“Our society is generating a huge wave of industrial and public data, which will transform the way we produce, consume and live. I want European businesses and our many SMEs to access this data and create value for Europeans – including by developing Artificial Intelligence applications. Europe has everything it takes to lead the ‘big data’ race, and preserve its technological sovereignty, industrial leadership and economic competitiveness to the benefit of European consumers.”

Thierry Breton, Commissioner for Internal Market (Feb. 19, 2020)

A Transformational Data Shift?

A major emphasis of the Commission’s plan (and its companion data strategy) is improving access to data, the foundation of any AI endeavor. As the Commission’s White Paper notes (citing a 2018 IDC report), the volume of data produced each year is expected to reach 175 zettabytes (175 × 10^21 bytes) in 2025.  At current rates of data generation, two years from now there will be more information on Earth than has accumulated from the dawn of human history to the present (A. Zegart, 2020).

Much of that data, the Commission suggests, does not reflect the culture and diversity of the EU, especially in the area of facial recognition. Thus, US companies with access to large datasets that include records better representing the EU could be well-positioned to leverage their data and AI-based models in the EU, and potentially reduce regulatory liability risks.  On the other hand, US companies that deploy AI models in the EU built on what many consider to be US-centric datasets, which may not reflect the diversity of EU member populations, may face heightened scrutiny under the new regulations.

The Commission’s plan also makes clear that without sufficient and appropriate datasets, its efforts to build a competitive AI ecosystem may be stymied.  Its solution is to build cooperation among EU members and harness the “enormous volume of new data yet to be generated.”  A lot of that new data, says the Commission, will be less about people and more about processes.

By most accounts, the significant recent advances in AI and the surge in uses for AI-based products and services–led by US and Chinese tech companies–can be traced back to successful efforts to accumulate large datasets containing information about human activities.  The EU sees the future shifting toward more industrial data.  According to the Commission’s data strategy, “the increasing volume of non-personal industrial data and public data in Europe, combined with technological change in how the data is stored and processed, will constitute a potential source of growth and innovation that should be tapped.” In fact, the Commission says this shift “constitutes an opportunity for Europe to position itself at the forefront of the data and AI transformation.”

Time will tell whether a shift in focus from behavioral (human-centric) data to industrial process data (which could lead to better efficiencies and thus lower-cost goods) will actually transform the AI industry inside the EU, or even outside it, where regulations on behavioral data are weaker than in the EU or do not exist at all.

“We want every citizen, every employee, every business to stand a fair chance to reap the benefits of digitalisation. Whether that means driving more safely or polluting less thanks to connected cars; or even saving lives with AI-driven medical imagery that allows doctors to detect diseases earlier than ever before.”

Margrethe Vestager, Executive Vice-President for a Europe Fit for the Digital Age (Feb. 19, 2020)

The Race for AI Dominance Intensifies?

The Commission’s White Paper expresses a desire to “increase Europe’s technological sovereignty in key enabling technologies and infrastructures for the data economy.”  To reach its goals, the plan says member states “must act as one and define its own way…to promote the development and deployment of AI.”  It also references “European AI” as something distinguishable from AI systems developed elsewhere.

The formation of a European AI ecosystem focused on creating European AI, built on industrial process data that reflects EU member values, together with strict regulations on actors outside the EU (including US companies whose AI systems are used in the EU), could help EU member countries compete alongside the US and China. On the other hand, the same efforts could prompt a range of responses from regulated actors and their governments outside the EU, depending on how restrictive or competitive they consider the Commission’s approach and how it affects their own efforts to seek dominance over the AI industry.