BRUSSELS — The European Parliament voted decisively on Wednesday to approve the Artificial Intelligence Act, the world's first comprehensive legal framework for artificial intelligence. The landmark legislation, years in the making, imposes strict rules on high-risk AI systems, bans certain "unacceptable" applications, and creates a regulatory model expected to influence global standards.
The vote — 523 in favor, 46 against, with 49 abstentions — marks a historic moment in technology governance. "Europe has established itself as the global standard-setter for trustworthy AI," said European Commission President Ursula von der Leyen. "This is not about stifling innovation — it's about putting people first."
What the AI Act actually does
The regulation takes a risk-based approach, categorizing AI applications into four tiers: unacceptable, high-risk, limited-risk, and minimal-risk. Systems deemed "unacceptable" are banned outright — including social scoring by governments, real-time biometric surveillance in public spaces (with narrow exceptions), and AI that manipulates human behavior.
Key provisions of the EU AI Act
- 🚫 Banned practices: Social scoring, real-time facial recognition in public (with limited exceptions), subliminal manipulation, and predictive policing based on profiling.
- ⚠️ High-risk systems: AI used in critical infrastructure, education, employment, law enforcement, and migration must undergo conformity assessments and registration in an EU database.
- 📝 Transparency obligations: Users must be told when they are interacting with a chatbot or an emotion recognition system, and AI-generated content such as deepfakes must be clearly labeled as artificial.
- 💰 Fines: Violations can result in fines of up to €35 million or 7% of global annual turnover, whichever is higher — a penalty structure modeled on, but exceeding, GDPR's €20 million / 4% cap.
- ⏰ Timeline: Most provisions take effect in 2026, two years after the law enters into force, but the bans on unacceptable practices apply within six months.
Tech industry reaction: Mixed and measured
Major tech companies responded with cautious support mixed with concerns about compliance costs. OpenAI CEO Sam Altman welcomed the framework but warned about "overly prescriptive rules for general-purpose AI." Google and Microsoft both issued statements pledging to comply while seeking clarifications on certain provisions.
"The AI Act creates regulatory certainty," said a Google spokesperson. "We will work constructively with EU authorities to ensure our products meet the highest safety standards." However, some smaller AI startups expressed anxiety about compliance burdens favoring deep-pocketed incumbents.
Global ripple effects: The 'Brussels Effect' in action
Just as GDPR became the de facto global standard for data privacy, the AI Act is expected to reshape AI governance worldwide. Companies selling AI products in the EU's 450-million-person market will have to comply, likely leading to global adoption of similar standards.
Already, lawmakers in Canada, Brazil, Japan, and South Korea are studying the EU framework. The United States, meanwhile, has taken a more sectoral approach, though the White House recently issued an executive order on AI safety. China, which has its own AI regulations focused on algorithmic recommendation and deepfakes, is also watching closely.
"This is a defining moment," said Anu Bradford, Columbia Law professor and author of 'The Brussels Effect.' "The EU has once again used its market power to export its regulatory standards to the rest of the world. Companies building AI for global markets will design with EU rules in mind from day one."
What happens next
The AI Act will be formally signed into law in the coming weeks. The European AI Office, a new regulatory body, will oversee enforcement. Companies will have two years to prepare for full compliance, though banned practices must cease within six months.
For consumers, the changes may be invisible at first — but experts say the law will fundamentally reshape how AI is developed and deployed, with greater emphasis on safety, transparency, and fundamental rights.
This story is part of SKY Today's ongoing technology coverage.