The European Commission’s proposed new regulations for artificial intelligence (AI) technologies and systems, issued April 21, 2021, include enforcement provisions that would empower public authorities to monitor regulated AI entities operating in the European Union (EU) and to seek stiff fines from those that do not comply with the rules. The proposed regulations would also grant authorities the power to impose non-monetary penalties, including ordering offending companies to remove their AI systems from the EU market. These are tough measures, assuming public authorities exercise their discretion in a way that actually incentivizes compliance and positive behavior.

As a command-and-control system, the Commission’s proposed enforcement rules rely almost entirely on public authorities conducting “market surveillance activities” to identify AI systems that do not comply with the regulations or other applicable EU laws. Those authorities will have access to company datasets, source code, and relevant risk assessment and technical documentation to assist them in their investigations of AI systems and practices. Once problems are identified, authorities can require regulated entities to take immediate corrective action to achieve compliance, to withdraw their AI systems from the market, or to recall AI systems within a reasonable time period, depending on the nature and degree of potential risk to Union citizens.

In addition, the proposed rules would grant authorities the power to impose significant monetary fines on violators of up to 2, 4, or 6 percent of a company’s worldwide revenues, depending on the nature of the company’s AI system and its non-compliance. The specific penalty amount imposed or corrective action demanded is within the public authorities’ sole discretion. Thus, the effectiveness of the Commission’s enforcement plan will largely depend on how assertively authorities exercise their discretionary law enforcement powers.

But even the most zealous enforcement hawks will likely attract at least some criticisms when their actions are not perceived as being tough enough on violators.

Moreover, the Commission’s enforcement framework places a tremendous burden on resource-strained public agencies. They must understand highly specialized and nuanced technologies (in the case of AI, technologies that are still evolving) and set clear, understandable expectations consistent with the regulations, often through frequently updated policies (which risk being one-size-fits-all), guidance, and enforcement decision documents. The same authorities are then required to police the industry to spot rule-breakers, no easy task. When agencies lack adequate resources, catching every instance of non-compliance becomes challenging, increasing the potential risk to the public.

What are some alternative approaches? For one, adding a robust public participation feature to the Commission’s proposed rules could lessen the burden on public authorities. Allowing interested stakeholders to review administrative decisions (including determinations or assessments made by “notified bodies”) before they are finalized could enhance agency subject matter expertise. It could also serve to hold authorities accountable for actions perceived as unfair, unreasonable, or inconsistent across the industry.

Another option is to provide for citizen suits, which would give stakeholders the right to sue decision-makers whose enforcement decisions are disputed, or to directly sue regulated entities on behalf of the public when violations of the rules are identified. Separately, providing a private right of action mechanism would allow individuals harmed by rule violators a means to seek damages from AI companies. Together, these mechanisms could provide greater incentive for regulated AI companies to comply with the rules once they are adopted.

That said, lawmakers generally do not favor citizen suits and private rights of action over command-and-control enforcement schemes, for a number of reasons. For instance, public participation in the administrative process can be superficial at best; in this case, if the public has no access to the company datasets, source code, and other sensitive and proprietary information needed to perform a comprehensive review of an AI system, fairly assessing an agency’s actions could be stymied. Moreover, private rights of action are controversial because they can result in significant class action litigation over even the most minor instances of non-compliance, as the history of Illinois’ Biometric Information Privacy Act (BIPA) has demonstrated.

Even so, elements of public participation, citizen suits, and private litigation could, in some form, strengthen the Commission’s proposed command-and-control-style enforcement framework.

The post Are Europe’s Proposed AI Regulations Tough Enough? first appeared on ARTIFICIAL INTELLIGENCE TECHNOLOGY AND THE LAW.