The European Union's AI Act, the world's first comprehensive regulatory framework for artificial intelligence, has been criticized for potentially hindering the region's AI innovation. With only a few of its provisions currently in effect, the Act is already being cited by critics as a blueprint for how regulation can slow AI development.
The EU's approach has placed it behind the US and China, where AI development continues to flourish. Unlike the EU, the US has prioritized substantial investment in AI technology while avoiding stringent regulation. This strategy has produced several of the world's top AI models, whereas the EU has yielded only one globally competitive large language model developer, France's Mistral AI.
The EU AI Act implements a risk-based framework that categorizes AI systems by perceived risk level: unacceptable, high, limited, and minimal. High-risk systems, those that could harm public health, safety, or fundamental rights, face the strictest obligations, and the resulting compliance burden raises development costs and delays market introduction.
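To make the tiering concrete, here is a minimal sketch that models the Act's four categories as data. The tier names come from the Act itself; the example obligations attached to each tier are illustrative assumptions for demonstration, not quotes from the legal text.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # strict compliance obligations
    LIMITED = "limited"            # transparency duties
    MINIMAL = "minimal"            # largely unregulated

# Illustrative mapping of tiers to example duties; the specific
# entries are assumptions for demonstration, not legal text.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited from the EU market"],
    RiskTier.HIGH: ["conformity assessment", "logging", "human oversight"],
    RiskTier.LIMITED: ["disclose that users are interacting with AI"],
    RiskTier.MINIMAL: [],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the example duties attached to a risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH))
```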
Critics argue these regulatory demands could stifle innovation. Requirements such as detailed logging, high-quality training datasets, and human oversight significantly increase costs, delay launches, and can be practically unachievable. The Act's broad definition of an AI system, which can sweep in simple algorithms and automation tools, further complicates compliance across a wide array of technologies.
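To give a sense of what "detailed logging" might mean in engineering terms, the sketch below wraps a model call in an audit record. The record fields chosen here (timestamp, model id, inputs, output) are assumptions about what a compliance regime might require, not a list drawn from the Act, and the model itself is a stand-in.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_act_audit")

def logged_prediction(model_id: str, predict, inputs: dict) -> dict:
    """Run a model call and emit an audit record alongside the result.
    The recorded fields are an illustrative guess at what a compliance
    log might capture, not requirements quoted from the Act."""
    output = predict(inputs)
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "inputs": inputs,
        "output": output,
    }))
    return output

# Hypothetical usage with a stand-in scoring model:
result = logged_prediction(
    "credit-scorer-v1", lambda x: {"score": 0.73}, {"income": 52000}
)
```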
The potential fines for non-compliance, up to €35 million or 7% of worldwide annual turnover, whichever is higher, further intensify these concerns. Such penalties could push AI firms to operate outside the EU, exacerbating the technological gap between Europe and countries like the US and China.
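As a quick illustration of how that cap works, the snippet below computes the maximum penalty as the higher of the two figures, which is how the Act frames fines for the most serious violations; the sample turnover figure is hypothetical.

```python
def max_fine_eur(annual_turnover_eur: float) -> float:
    """Upper bound on an AI Act fine for the most serious violations:
    the higher of a fixed EUR 35 million or 7% of worldwide annual
    turnover."""
    return max(35_000_000, 0.07 * annual_turnover_eur)

# Hypothetical firm with EUR 2 billion in global turnover:
# 7% of turnover (EUR 140M) exceeds the fixed EUR 35M floor.
print(f"EUR {max_fine_eur(2_000_000_000):,.0f}")  # EUR 140,000,000
```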
While the EU's regulatory trailblazing is intended to safeguard against AI risks, the move could serve as a cautionary tale for other jurisdictions such as the US, which has so far refrained from heavy-handed regulation of the industry and has maintained its leadership in AI development.