EU’s Groundbreaking AI Act: A New Era of Digital Regulation Begins
The European Union has taken a significant step toward regulating artificial intelligence (AI): regulators can now ban AI systems deemed to pose “unacceptable risk” or harm. The move comes under the EU’s AI Act, a comprehensive regulatory framework approved last March and in force since August 1, with its first compliance deadline on February 2. The Act sorts AI applications into four risk tiers, from minimal to unacceptable. The unacceptable tier covers AI used for social scoring, manipulative decision-making, exploiting vulnerabilities, predicting criminal behavior based on a person’s appearance, inferring personal characteristics from biometrics, collecting real-time biometric data for law enforcement, inferring emotions at work or school, and creating unauthorized facial recognition databases.

Companies using AI applications deemed unacceptably risky face fines of up to €35 million or 7% of annual revenue, whichever is greater, with enforcement provisions taking effect in August. More than 100 companies, including Amazon, Google, and OpenAI, have voluntarily pledged to align with the AI Act’s principles ahead of its application, though some, such as Meta and Apple, have not signed the pact. The Act also carves out exceptions for law enforcement and for medical or safety uses under strict conditions. The European Commission plans to release additional guidelines in early 2025 to clarify how the AI Act interacts with other legal frameworks such as GDPR, NIS2, and DORA.
Read more at TechCrunch…