European Parliament Advances Groundbreaking AI Regulation: The European AI Act

A key committee of lawmakers in the European Parliament has taken a significant step towards enacting pioneering legislation to regulate artificial intelligence (AI) systems. Known as the European AI Act, the law marks a groundbreaking development in the race among global authorities to govern rapidly evolving AI technologies. While China has already drafted rules to manage generative AI products like ChatGPT, the European AI Act represents the first legislation of its kind in the Western world.

Taking a risk-based approach, the AI Act establishes different obligations for AI systems based on the level of risk they pose. It categorizes AI applications into four levels: unacceptable risk, high risk, limited risk, and minimal or no risk. Applications deemed an unacceptable risk are prohibited outright and cannot be deployed within the European Union.

The prohibited applications include AI systems that:

- employ subliminal or deceptive techniques to manipulate behavior;
- exploit vulnerabilities of individuals or specific groups;
- use biometric categorization based on sensitive attributes;
- engage in social scoring or evaluating trustworthiness;
- predict criminal or administrative offenses;
- create or expand facial recognition databases through untargeted scraping; and
- infer emotions in law enforcement, border management, the workplace, and education.

Recognizing the concerns surrounding foundation models like ChatGPT, the AI Act introduces specific requirements for their providers. Developers of foundation models will be mandated to implement safety checks, adopt data governance measures, and apply risk mitigation strategies before making their models publicly available. Moreover, they must ensure that the training data used to inform their systems complies with copyright law.

The European Parliament's approval of the AI Act is a significant milestone, but the legislation still has a long way to go before becoming law. Even so, it represents a crucial step towards establishing guidelines for AI companies and organizations operating within the European Union.

The tech industry has raised concerns about the broadened scope of the AI Act, fearing that it may inadvertently capture harmless forms of AI. The Computer and Communications Industry Association (CCIA) cautioned that the Act’s amendments assume that broad categories of AI are inherently dangerous, potentially subjecting useful AI applications to stringent requirements or even bans.

Experts believe that the European AI Act will set a global standard for AI regulation. However, other jurisdictions such as China, the United States, and the United Kingdom are also swiftly developing their own regulatory responses to AI. These jurisdictions are expected to monitor the AI Act negotiations closely and tailor their own approaches accordingly.

Dessi Savova, head of continental Europe for the tech group at Clifford Chance, stated that the AI Act would put into law many of the ethical AI principles advocated by organizations. Sarah Chander, a senior policy adviser at European Digital Rights, highlighted that the Act would necessitate testing, documentation, and transparency requirements for foundation models like ChatGPT, thus bringing increased accountability to AI development.

With the EU’s AI Act taking center stage, it is likely to play a pivotal role in shaping AI legislative initiatives worldwide, just as the General Data Protection Regulation did in the realm of data protection. As countries around the globe grapple with the regulation of generative AI, the EU’s efforts are set to influence the international landscape and position the region as a leading standards-setter once again.