EU lawmakers reach preliminary accord on rules for foundational models/GPAIs
After more than 20 hours of negotiations, European Union lawmakers remain locked in talks over the bloc's rules for artificial intelligence (AI). However, a leaked proposal suggests a preliminary agreement has been reached on how to handle foundational models, also known as general purpose AIs (GPAIs). While industry lobbyists had pushed for a full regulatory carve-out for these advanced AIs, the proposal retains elements of the tiered approach the parliament suggested earlier this year.
EU lawmakers have reached a preliminary agreement on the regulation of foundational models and general purpose AIs (GPAIs). The proposal includes a partial carve-out for open source GPAIs, classification of GPAIs with systemic risk, and obligations for providers of GPAIs. The agreement also mentions the use of codes of practice and the establishment of an AI Office to oversee compliance. However, discussions on other contentious elements of the AI Act are still ongoing, and the ultimate fate of the regulation remains uncertain.
Partial carve-out for open source GPAIs
The leaked proposal indicates that GPAI systems provided under free and open-source licenses, meaning their weights, information on the model architecture, and information on model usage are made publicly available, would receive a partial exemption from certain obligations. However, exceptions still apply to "high-risk" models. The proposal also states that the exemption for open-source models is limited to non-commercial deployment.
Systemic risk designation for GPAIs
The preliminary agreement classifies GPAIs as posing "systemic risk" based on their high-impact capabilities. A model would qualify when the cumulative amount of compute used for its training, measured in floating point operations (FLOPs), exceeds 10^25, on the presumption that models of this scale could have negative effects on public health, safety, security, fundamental rights, or society as a whole. Only a handful of cutting-edge GPAIs would currently meet this threshold, limiting the regulatory burden.
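To put the threshold in perspective, the classification above can be sketched in a few lines. This is not from the leaked text: the `estimate_training_flops` helper and the example model size are assumptions, using the widely cited rule of thumb that dense-transformer training costs roughly 6 FLOPs per parameter per token.

```python
# Sketch of the systemic-risk compute test (hypothetical helper, not the
# AI Act's legal definition).

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # threshold named in the preliminary deal


def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough cumulative training compute: ~6 FLOPs per parameter per token."""
    return 6 * n_params * n_tokens


def exceeds_threshold(training_flops: float) -> bool:
    """True if cumulative training compute is above the 10^25 FLOP bar."""
    return training_flops > SYSTEMIC_RISK_THRESHOLD_FLOPS


# Hypothetical model: 100B parameters trained on 10T tokens -> 6e24 FLOPs,
# below the bar, so it would not be presumed to pose systemic risk.
flops = estimate_training_flops(1e11, 1e13)
print(f"{flops:.1e}", exceeds_threshold(flops))
```

Under this rough estimate, a 100-billion-parameter model would need several times more training data, or a substantially larger parameter count, before crossing the 10^25 line.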
Obligations for providers of GPAIs with systemic risk
The proposal outlines several obligations for providers of GPAIs with systemic risk. These include model evaluation using standardized protocols, documenting and reporting serious incidents, conducting adversarial testing, ensuring adequate cybersecurity, and reporting energy consumption. The AI Office would decide whether a GPAI is classified as posing systemic risk, with scientific panels able to issue "qualified alerts." Providers of models that meet the criteria would be required to notify the Commission.
Other obligations for providers of GPAIs
Providers of GPAIs that do not qualify as posing systemic risk would still face obligations. These include testing, evaluation, and technical documentation, which must be provided to regulatory authorities and oversight bodies on request. Model makers would also need to adopt a policy to respect EU copyright law, disclose a summary of the training data used to build the model, and give downstream deployers an overview of the model's capabilities and limitations.
Codes of practice and AI governance
The proposal mentions codes of practice as a means for GPAI providers to demonstrate compliance until harmonized standards are published. The AI Office would be involved in drawing up such codes. The Commission would issue standardization requests for GPAIs, focusing on areas such as reporting, documentation, and improving energy and resource use, and would report regularly on progress toward these standardized elements.
Remaining contested elements of the AI Act
The trilogue on the AI Act, involving the Council of the European Union, the European Parliament, and the European Commission, is still ongoing. Some highly sensitive issues, including the use of biometric surveillance for law enforcement purposes, remain unresolved. The fate of the AI Act depends on reaching agreement on all components. Lawmakers aim to finalize the risk-based rulebook, which the Commission proposed in April 2021, this week, but success is not guaranteed.