EXPLAINER

AI law kicks in: What you need to know about the new European AI Act

The EU AI Act, now in force, marks a new era of artificial intelligence regulation

Jay Hilotin, Senior Assistant Editor
'Big Tech' is already responding to the European AI Act, which kicked in on August 2, 2025.

On August 2, 2025, the European Union made history by enforcing the world’s first comprehensive artificial intelligence (AI) law, the EU AI Act.

This landmark legislation marks the end of unregulated AI.

According to the European Parliament, the world’s first comprehensive AI law is meant to make AI more transparent, curb its excesses, and keep it safe for everyone.

It sets strict rules for general-purpose AI (GPAI) models like ChatGPT, Gemini, Grok, Perplexity, Midjourney, Llama and Claude across all 27 EU member states. 

We explore the EU AI Act’s implications, its global reach, and what it means for the future of AI development.

What is the EU AI Act?

It is the bloc's first comprehensive law regulating AI.

It forms part of the EU's digital strategy, as the bloc seeks to regulate AI to ensure better conditions for the development and use of this innovative technology.

The Act recognises that AI can create numerous benefits — such as better healthcare, safer and cleaner transport, more efficient manufacturing, and cheaper and more sustainable energy.

What are the timelines?

The EU AI Act took effect on August 1, 2024, and sets harmonised rules for the development, marketing, and use of AI systems within the EU through a risk-based approach.

Major compliance deadlines began on August 2, 2025, with further obligations phasing in from 2026 onwards.

What is the key feature of the EU AI legislation?

The Act categorises AI into four risk tiers — unacceptable, high, limited, and minimal — with stricter rules for higher-risk systems. 

What is the AI risk-based system?

The Act classifies AI systems into different risk levels, illustrated in the sketch after this list:

  • Unacceptable risk AI systems (like government-run social scoring) are banned outright.

  • High-risk AI systems (e.g., AI used in job applicant screening) face strict legal requirements for safety, transparency, and human oversight. These are the most tightly regulated systems, because they can cause significant harm if they fail or are misused, for example in law enforcement or recruitment.

  • Limited-risk AI systems carry a risk of manipulation or deception, e.g. chatbots or emotion-recognition systems. They face transparency obligations: people must be informed that they are interacting with AI.

  • Minimal/low-risk AI systems cover all other AI, such as spam filters, and can be deployed without additional restrictions.
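For readers who want a schematic view, here is a minimal, purely illustrative Python sketch of how the article's example systems map onto the four tiers. The tier names come from the Act; the mapping and the one-line descriptions are a simplification, not legal guidance.

```python
# Purely illustrative: the four risk tiers of the EU AI Act,
# mapped to the example systems named in this article.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "strict requirements for safety, transparency, human oversight"
    LIMITED = "transparency obligations (users must know it is AI)"
    MINIMAL = "no additional restrictions"

EXAMPLES = {
    "government-run social scoring": RiskTier.UNACCEPTABLE,
    "job applicant screening": RiskTier.HIGH,
    "customer-facing chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.name} ({tier.value})")
```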

What are the penalties for non-compliance?

Fines for violations can reach up to 7% of a company’s global annual revenue, so the stakes are monumental.

Why was it enacted?

According to relevant EU documents, the Act aims to protect people’s fundamental rights, health, and safety while fostering innovation.

It requires AI providers to conduct risk assessments and follow conformity assessment procedures before placing high-risk AI on the market.

It is expected to set a global standard for AI governance, similar to how the EU’s General Data Protection Regulation (GDPR) influenced data privacy worldwide.

Overall, the Act seeks to ensure AI in Europe is safe, ethical, transparent, and beneficial to society, balancing risk management with innovation acceleration.

Who will implement the Act?

Governance will be overseen by national authorities cooperating through a European Artificial Intelligence Board. The law also proposes support for innovation, especially for small and medium enterprises, through regulatory sandboxes.

What is the August 2025 deadline about?

The August 2025 deadline specifically targets GPAI models, which power applications like chatbots, image generators, and even creative transformations, such as animating the Mona Lisa to smile and wave.

What does the EU AI Act mandate?

The Act imposes rigorous obligations on AI providers (sketched in the example after this list), including:

  • Registration: All GPAI providers must register with the EU AI Office in Brussels.

  • Transparency: Companies must document training data sources, conduct risk assessments for “systemic risk” models, and report copyright compliance.

  • Accountability: Providers must disclose model capabilities and limitations to ensure transparency.
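As a rough illustration of what these documentation duties might look like in practice, here is a hypothetical Python sketch of a provider's compliance record. The field names are invented for this example; the Act does not prescribe any particular schema.

```python
# Hypothetical sketch of a GPAI provider's compliance record.
# Field names are invented for illustration; the Act does not
# prescribe this exact schema.
from dataclasses import dataclass

@dataclass
class GPAIComplianceRecord:
    model_name: str
    registered_with_eu_ai_office: bool    # registration
    training_data_sources: list[str]      # transparency
    systemic_risk_assessed: bool          # for "systemic risk" models
    copyright_compliance_documented: bool
    capabilities: list[str]               # accountability
    known_limitations: list[str]

record = GPAIComplianceRecord(
    model_name="example-gpai-model",      # hypothetical model
    registered_with_eu_ai_office=True,
    training_data_sources=["licensed corpus", "documented web crawl"],
    systemic_risk_assessed=True,
    copyright_compliance_documented=True,
    capabilities=["text generation"],
    known_limitations=["can produce factual errors"],
)
print(record)
```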

What if companies do not comply?

Non-compliance carries severe penalties. 

For instance, based on 2024-2025 revenue projections, a 7% fine could cost Alphabet $24.5 billion or Meta $11.5 billion, dwarfing penalties in other regulatory frameworks.
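To make the arithmetic concrete, here is a quick back-of-the-envelope Python sketch. The revenue figures are rough approximations implied by the article's own estimates, not audited numbers.

```python
# Back-of-the-envelope: the 7% fine cap applied to approximate
# annual revenues implied by the article's estimates.
FINE_CAP = 0.07  # up to 7% of global annual revenue

implied_revenue_usd = {
    "Alphabet": 350e9,  # 7% -> ~$24.5 billion
    "Meta": 164e9,      # 7% -> ~$11.5 billion
    "OpenAI": 10e9,     # 7% -> ~$0.7 billion ($700 million)
}

for company, revenue in implied_revenue_usd.items():
    print(f"{company}: maximum fine ~${FINE_CAP * revenue / 1e9:.1f} billion")
```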

Why this matters

The EU AI Act is not just a regional policy — it’s seen as a global game-changer. 

Any company offering GPAI models in the EU, whether based in Silicon Valley or Shanghai, must comply. 

This “extra-territorial reach” mirrors the EU’s General Data Protection Regulation (GDPR), which became a global privacy standard after its 2018 implementation. 

As German regulators launch an “AI Service Desk” and other countries adopt similar frameworks, the EU AI Act is poised to define “safe AI” worldwide.

What is the response to the EU AI Act?

Google and OpenAI have pledged compliance, with Google signing a voluntary AI Code of Practice. 

Meta, however, has challenged the rules as “legally questionable,” a risky stance given the potential fines. 

Smaller AI companies face even greater challenges, as the Act’s documentation and compliance requirements demand significant resources, potentially stifling innovation.

The challenges of compliance

Behind the scenes, the EU AI Act is reshaping AI development. 

Companies must maintain detailed inventories of AI systems, document training data, and prove copyright compliance.

This would be a logistical nightmare for smaller players.

The Act also introduces immediate obligations for any modifications to existing models, prompting some firms to split EU and global model deployments. 

What is the effect so far?

Venture capital funding for EU-focused AI startups is already declining as investors grapple with regulatory uncertainty.

Critics, like Andrew Orlowski in The Telegraph, argue that the rush to adopt AI without robust regulation risks a “race to the bottom” in quality and ethics. 

Meanwhile, computer scientist Yejin Choi, in a TED Talk, highlighted AI’s paradox: it’s “incredibly smart and shockingly stupid,” capable of transformative feats yet prone to errors that demand oversight. 

The EU AI Act addresses these concerns by prioritising accountability and safety.

What’s next?

The August 2025 deadline is just the beginning. 

By August 2, 2026, the EU AI Office will gain full enforcement powers, and by August 2027, all AI models — including those developed before 2025 — must comply. 

The next phase will target high-risk AI systems in sectors like employment, education, healthcare, and critical infrastructure, potentially reshaping entire industries.

Some of the ripple effects: Companies are rethinking their AI strategies, and global regulators are watching closely.

The EU’s framework could inspire similar laws elsewhere, just as GDPR did. However, the Act’s critics warn of “overregulation” stifling innovation, particularly for smaller firms unable to bear compliance costs.

The big picture

The EU AI Act signals a regulatory reckoning for AI. 

While the US and China focus on advancing AI capabilities, Europe is setting the global standard for responsible development. 

As Brussels becomes the de facto AI regulator, companies face a stark choice: adapt or risk crippling fines. 

For users, this could mean safer, more transparent AI systems — but it also raises questions about innovation and accessibility in an increasingly regulated world.
