In preparation for the May deadline to finalize guidance for general-purpose AI (GPAI) model providers on complying with the EU AI Act's provisions covering the most powerful AI models, a third draft of the Code of Practice has been released. This draft, in development since last year, is expected to be the final revision before the guidelines are formalized in the coming months. The Code aims to help GPAI model makers understand how to meet their legal obligations and avoid sanctions for non-compliance, which can reach up to 3% of global annual turnover.

To enhance the Code's accessibility, a dedicated website has been launched, where written feedback on the latest draft can be submitted until March 30, 2025. Within the EU AI Act's risk-based rulebook, a subset of obligations applies only to the most powerful AI model makers, covering areas such as transparency, copyright, and risk mitigation.

Streamlined

The latest revision of the Code boasts a more streamlined structure with refined commitments and measures, based on feedback from the second draft published in December. Further feedback, working group discussions, and workshops will contribute to the process of finalizing the guidance, with experts aiming to achieve greater clarity and coherence in the final adopted version.

The draft is organized into sections covering commitments for GPAIs, detailed guidance for transparency and copyright measures, and safety and security obligations for the most powerful models. The guidance includes an example of a model documentation form that GPAIs might be expected to fill in to ensure downstream deployers have access to key information for compliance.

The copyright section remains a contentious area: the current draft hedges commitments with terms like "best efforts" and "reasonable measures" when it comes to respecting rights reservations during data acquisition and mitigating the risk of copyright-infringing outputs.

Language from an earlier iteration of the Code, which suggested GPAIs should provide a single point of contact and complaint handling for rightsholders, has been removed. Instead, the current text states that signatories will designate a point of contact for communication with affected rightsholders and provide easily accessible information about it.

The current text also suggests GPAIs may refuse to act on copyright complaints they deem "manifestly unfounded or excessive," which could allow providers to ignore creatives who use AI tools to detect copyright issues and file complaints automatically.

Regarding safety and security, the EU AI Act’s requirements for evaluating and mitigating systemic risks apply only to a subset of the most powerful models, but the latest draft sees some previously recommended measures being narrowed in response to feedback.

US Pressure

The EU press release on the latest draft does not mention the US administration’s criticisms of European lawmaking and the bloc’s AI rules. However, US Vice President JD Vance has dismissed the need to regulate AI safety, instead emphasizing “AI opportunity” and warning Europe that overregulation could harm the industry.

Since then, the EU has abandoned the AI Liability Directive and is planning an “omnibus” package of simplifying reforms to reduce red tape and bureaucracy for businesses. With the AI Act still being implemented, there is pressure to dilute requirements, and the European Commission is producing clarifying guidance that will shape how the law applies.

The AI Office, which oversees enforcement and other activity related to the law, will provide further guidance “in due time” to clarify the scope of the rules. This could offer a pathway for lawmakers to respond to US lobbying to deregulate AI. Mistral, a French GPAI model maker, has claimed difficulties in finding technological solutions to comply with some rules and is working with regulators to resolve the issues.
