The European Union has released the first draft of its Code of Practice for general-purpose AI (GPAI) models, offering a clearer roadmap for tech giants to manage risks and avoid penalties. Though this draft isn’t set in stone — the final version is expected in May 2025 — it gives companies like OpenAI, Google, Meta, Anthropic, and Mistral a preview of what’s coming their way.
The EU’s AI Act, which officially went into effect on August 1, laid the foundation for these new guidelines but left gaps in the regulation of more complex AI systems. This draft, published on Thursday, fills in some of those blanks and invites feedback from stakeholders to refine the framework before it becomes enforceable.
At the heart of this draft are rules around transparency, copyright, risk assessment, and governance. Models falling under these regulations are defined as those trained with computing power exceeding 10 septillion (10^25) FLOPs, a threshold high enough to capture the major players in the AI world today, though more companies could cross it as technology evolves.
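To put that threshold in perspective, training compute for large language models is often estimated with the rough heuristic of about 6 FLOPs per parameter per training token. The sketch below uses that heuristic to check a hypothetical training run against the 10^25 FLOP bar; the parameter and token counts are illustrative assumptions, not figures from the draft.

```python
# Rough check against the AI Act's 1e25 FLOP ("10 septillion") threshold.
# Uses the common ~6 * parameters * tokens heuristic for dense models;
# the model sizes below are hypothetical, for illustration only.

GPAI_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_parameters: float, n_tokens: float) -> float:
    """Estimate cumulative training compute via the ~6ND heuristic."""
    return 6 * n_parameters * n_tokens

params = 1e12  # hypothetical: a one-trillion-parameter model
tokens = 5e12  # hypothetical: trained on five trillion tokens

flops = estimated_training_flops(params, tokens)
print(f"Estimated training compute: {flops:.1e} FLOPs")
print("Covered by the GPAI code:", flops > GPAI_THRESHOLD_FLOPS)
```

By this estimate, a run of that scale lands at roughly 3 x 10^25 FLOPs, comfortably past the threshold.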
Transparency is a significant theme. AI developers will be required to disclose the web crawlers used in training their models, a move designed to address copyright concerns from creators. The draft also outlines risk assessment protocols aimed at preventing issues like cybercrime, AI-driven discrimination and, in a nod to sci-fi nightmares, runaway AI scenarios.
Companies will also need to adopt a Safety and Security Framework (SSF) to manage these risks. This involves detailing their risk mitigation strategies and updating them in line with evolving threats. On the technical side, AI makers must safeguard their models, implement failsafe mechanisms, and continuously reassess these measures.
Governance rules push for internal accountability, demanding that companies conduct regular risk assessments and even bring in external experts for oversight.
Of course, the stakes are high. Violating the AI Act could mean severe penalties: fines of up to €35 million or seven per cent of a company’s global annual revenue, whichever is greater. The EU’s track record of levying hefty fines for tech breaches suggests these aren’t empty threats.
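As a back-of-the-envelope illustration of how that “whichever is greater” clause plays out, the short sketch below computes the maximum exposure for a few hypothetical revenue figures; none of the numbers come from the draft itself.

```python
# Illustrative calculation of the AI Act's maximum fine:
# the greater of EUR 35 million or 7% of global annual revenue.

FLAT_CAP_EUR = 35_000_000
REVENUE_SHARE = 0.07  # seven per cent

def max_fine(global_annual_revenue_eur: float) -> float:
    """Return the upper bound of the fine for a given annual revenue."""
    return max(FLAT_CAP_EUR, REVENUE_SHARE * global_annual_revenue_eur)

# Hypothetical revenue figures, for illustration only.
for revenue in (100e6, 500e6, 300e9):
    print(f"Revenue EUR {revenue:>15,.0f} -> max fine EUR {max_fine(revenue):,.0f}")
```

The flat €35 million figure binds only until global revenue passes €500 million; beyond that point, the seven per cent share takes over, which is why the exposure for the largest AI companies runs into the billions.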
Feedback from industry players is being collected on the Futurium platform until November 28, giving them a chance to influence the final guidelines. The clock is ticking, but there’s still time for input before the rules are locked in and enforcement begins. With the potential for big changes in how AI is governed, all eyes will be on the evolution of these guidelines in the months to come.