By Asmita - Feb 11, 2025
The EU AI Act, which entered into force on August 1, 2024, and is being implemented in phases, aims to regulate AI use for safety and transparency. It categorizes AI systems by risk level and requires compliance from businesses operating in the EU; non-compliance can result in significant fines. U.S. Vice President JD Vance has warned against excessive regulation of the AI industry, opposing the EU's strict approach, while European leaders emphasize that regulation is needed to build trust and prevent public rejection of AI.
The EU AI Act, the first law of its kind, aims to regulate the use of AI to make it safer and more secure for both public and commercial applications. It intends to mitigate risks, ensure human control, reduce negative environmental and societal impacts, protect data safety and privacy, and ensure transparency in AI use. The Act categorizes AI systems into four risk tiers: unacceptable, high, limited, and minimal. As of February 2, 2025, the first phase of implementation is in effect, banning AI systems that pose unacceptable risks and requiring organizations operating in the European market to ensure AI literacy among employees involved in AI deployment. The Act's phased approach triggers different regulatory requirements at successive intervals from its entry into force on August 1, 2024. Obligations for providers of general-purpose AI models, along with provisions on penalties and transparency rules for general-purpose AI systems, will apply from August 1, 2025. The legislation will begin applying to high-risk AI systems in two stages, on August 1, 2026, and August 1, 2027.
Compliance with the EU AI Act is essential for businesses operating in the EU that incorporate AI into their operations. Compliance involves identifying which categories their AI systems fall into, assessing risk levels, implementing AI governance frameworks, and ensuring transparency. By prioritizing compliance, businesses can mitigate legal risks and strengthen trust in their AI systems. The Act's ban on prohibited practices took effect on February 2, 2025, requiring providers and deployers to cease using AI systems that engage in prohibited practices. Non-compliance may result in fines of up to EUR 35 million or 7% of global annual turnover for the preceding year, whichever is higher. By May 2, 2025, the European Artificial Intelligence Office plans to issue Codes of Practice to guide providers of general-purpose AI models. On August 2, 2025, the European Commission plans to issue guidance to facilitate the reporting of serious AI system incidents by providers of high-risk AI systems.
In contrast to the EU's approach, U.S. Vice President JD Vance has warned that excessive regulation of the AI sector could stifle a transformative industry. Speaking at an AI summit in Paris, Vance argued against tightening governments' grip on AI, cautioning that over-regulation could harm the sector. He criticized the EU's Digital Services Act for its "massive regulations" on content moderation and policing misinformation, which he said placed an unfair burden on American tech giants. Vance also stated that the U.S. intends to remain the dominant force in AI and strongly opposed the European Union's tougher regulatory approach. He stressed that AI must remain free from ideological bias and must not be co-opted into a tool for authoritarian censorship. Vance also took aim at China, suggesting that partnering with the country could open the door to infiltration and seizure of information infrastructure.
Despite concerns raised by the U.S., European leaders, including French President Emmanuel Macron, defended the need for regulation to ensure trust in AI. Macron stressed that regulation is essential to prevent public rejection of AI. European Commission chief Ursula von der Leyen stated the EU would cut red tape and invest more in AI. The EU's AI legislation is designed to establish a sector-agnostic regulatory framework to shape AI governance and oversight across the EU. The Act's reach extends beyond the EU, potentially subjecting companies operating outside of Europe, including in the United States, to its requirements. The European Union sees AI regulation as essential for fostering innovation while safeguarding fundamental rights and ethical standards.