The European regulation on artificial intelligence, known as the AI Act, was approved in March 2024. European regulators were very proud of this achievement. Thierry Breton, then European Commissioner for the Internal Market, had tweeted months earlier, proudly declaring that Europe was the first continent to set clear rules for AI.
Deal!#AIAct pic.twitter.com/UwNoqmEHt5
— Thierry Breton (@ThierryBreton) December 8, 2023
The problem is that the AI Act could become another hurdle that keeps Europe from leading in the field of artificial intelligence. A clear example is Meta’s new Llama 3.1 models.
Llama 3 Models Use Too Many FLOPs
In the technical details for the Llama 3 model family, which includes the 3.1 release, Meta describes the "scale" of these models:
“We trained a model at a much larger scale than previous Llama models: our main language model was pre-trained using 3.8 × 10^25 FLOPs, almost 50 times more than the largest version of Llama 2. Specifically, we pre-trained a main model with 405 billion trainable parameters on 15.6 trillion text tokens.”
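As a rough sanity check (this calculation is not part of Meta's quote), the reported figure lines up with the common "6 × parameters × tokens" approximation for the training compute of dense transformer models:

```python
# Rough estimate of Llama 3.1 405B training compute using the common
# "6 * parameters * tokens" rule of thumb for dense transformers.
# Meta's exact accounting may differ; the figures come from their announcement.

params = 405e9    # 405 billion trainable parameters
tokens = 15.6e12  # 15.6 trillion pre-training tokens

flops_estimate = 6 * params * tokens
print(f"Estimated training compute: {flops_estimate:.2e} FLOPs")
# -> ~3.79e+25 FLOPs, consistent with the 3.8e25 figure Meta reports
```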
The computing power used to train these models far exceeds the threshold the AI Act sets before a model is presumed to pose a systemic risk. The regulation, which covers general-purpose AI (GPAI) models, states:
“GPAI models pose systemic risks when the total amount of computation used to train them exceeds 10^25 floating point operations (FLOPs). Vendors must inform the Commission within two weeks if their model meets this criterion. They may argue that, despite meeting the criteria, their model does not pose systemic risks. The Commission may decide, either on its own or based on a qualified alert from an independent scientific group, that a model has high-impact capabilities and thus poses a systemic risk.”
The threshold set by the AI Act (10^25 FLOPs) is almost four times lower than the compute Meta reports for training Llama 3.1 405B (3.8 × 10^25 FLOPs). These models would therefore likely be considered a systemic risk.
This doesn’t mean their use is banned, but it does mean that providers must show that they do not pose a threat to citizens and society.
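Putting the two figures side by side makes the gap concrete; a minimal sketch, using only the numbers quoted above:

```python
# Compare the reported Llama 3.1 405B training compute with the AI Act's
# systemic-risk threshold for general-purpose AI models.

AI_ACT_THRESHOLD_FLOPS = 1e25     # 10^25 FLOPs, per the AI Act
LLAMA_31_TRAINING_FLOPS = 3.8e25  # reported by Meta for the 405B model

ratio = LLAMA_31_TRAINING_FLOPS / AI_ACT_THRESHOLD_FLOPS
print(f"Llama 3.1 405B used {ratio:.1f}x the AI Act threshold")  # -> 3.8x

if LLAMA_31_TRAINING_FLOPS > AI_ACT_THRESHOLD_FLOPS:
    # Per the quoted rule, the provider must notify the Commission
    # within two weeks and may argue the model poses no systemic risk.
    print("Model is presumed to pose a systemic risk under the AI Act.")
```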
How Powerful Is Too Powerful?
The European Union’s goals for regulating artificial intelligence are reasonable and aimed at the long term, while technology companies are often focused on short-term gains and market leadership.
These differing perspectives lead to conflicts. The rapid pace of technology, especially AI, means that regulations risk being outdated or ineffective even before they are implemented.
This seems to be happening with the AI Act, whose current requirements constrain providers and innovators in the field. The intention of preventing problems before they arise is sound, but there is real uncertainty about where the line between powerful and too powerful lies, and whether crossing it actually poses a risk.
The European Union is aware of the rapid pace of technological change. In its FAQ section on AI regulation, officials ask whether the law is future-proof and note that the AI Act will be revised to adapt to new developments. This includes updating the FLOPs threshold, adding criteria for classifying general-purpose AI models as systemic risks, and modifying testing rules:
"The AI Act may be amended by delegated and implementing acts, including updating the FLOPs threshold (delegated act), adding criteria for classifying general-purpose AI models as systemic risks (delegated act), and changing the rules for regulatory sandboxes and real-world testing plans (implementing acts)."
This creates a complex situation for regulators and providers alike. The debate between regulation and innovation continues, but for now, it seems Europe may be falling behind.