The EU AI Act
The world’s first comprehensive AI law entered into force in August 2024. It applies to any company whose AI systems are placed on the EU market or affect people in the EU, regardless of where the company is headquartered.
Phased enforcement:
• Feb 2025: Prohibited practices banned (social scoring, real-time biometric surveillance in public spaces subject to narrow exceptions, exploitative manipulation of vulnerable groups)
• Aug 2025: General-purpose AI obligations active (transparency, documentation for foundation models)
• Aug 2026: High-risk AI obligations fully enforceable (risk management systems, data governance, human oversight, technical documentation)
Penalties: Up to €35 million or 7% of global annual turnover, whichever is higher, for the most serious violations; lower tiers apply to other breaches. Documented compliance effort is a formal mitigating factor — meaning the effort to comply matters even if you fall short.
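The top-tier fine formula is worth making concrete: it is the greater of the fixed cap and the turnover percentage, not the lesser. A minimal sketch (illustrative arithmetic only, not legal advice):

```python
def max_top_tier_fine(global_annual_turnover_eur: float) -> float:
    """Top-tier EU AI Act fine: the greater of EUR 35M or 7% of
    global annual turnover. Illustrative calculation only."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

# A company with EUR 1B global turnover faces up to EUR 70M,
# since 7% of turnover exceeds the EUR 35M floor.
print(f"{max_top_tier_fine(1_000_000_000):,.0f}")
```

For smaller companies the €35 million floor dominates; the percentage only bites once global turnover exceeds €500 million.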
Risk Categories
The EU AI Act classifies AI systems by risk level:
Unacceptable risk (banned):
Social scoring, manipulative AI, real-time biometric surveillance (with narrow exceptions).
High risk (heavy regulation):
AI in hiring, credit scoring, healthcare, law enforcement, education, critical infrastructure. Requires risk management systems, data governance, human oversight, and technical documentation.
Limited risk (transparency obligations):
Chatbots, deepfakes, emotion recognition. Users must be told they are interacting with AI, and AI-generated or manipulated content must be labeled as such.
Minimal risk (no specific obligations):
Spam filters, AI in video games, most recommendation systems. Still subject to general consumer protection law.
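The four tiers above amount to a lookup from use case to obligation level. A minimal sketch, using the example use cases from this section as hypothetical labels — real classification requires legal analysis of the Act's Annex III and prohibited-practice list, not string matching:

```python
# Simplified, illustrative mapping of example use cases to the
# EU AI Act's four risk tiers. The labels mirror the examples
# in the text above and are not an official taxonomy.
RISK_TIERS = {
    "unacceptable": {"social scoring", "manipulative ai",
                     "real-time public biometric surveillance"},
    "high": {"hiring", "credit scoring", "healthcare",
             "law enforcement", "education", "critical infrastructure"},
    "limited": {"chatbot", "deepfake", "emotion recognition"},
}

def classify(use_case: str) -> str:
    """Return the risk tier for a use case, defaulting to 'minimal'
    (spam filters, video games, most recommender systems)."""
    for tier, cases in RISK_TIERS.items():
        if use_case.lower() in cases:
            return tier
    return "minimal"

print(classify("hiring"))       # high
print(classify("spam filter"))  # minimal
```

Note the default: anything not caught by a higher tier falls to minimal risk, which matches the Act's structure — but in practice the high-risk boundary is the contested one, so err toward the stricter tier when in doubt.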
PM action: Classify your AI product’s risk level under the EU AI Act. If high-risk, start compliance work now — the August 2026 deadline requires risk management systems, technical documentation, and post-market monitoring to be in place, tested, and defensible. If limited risk, ensure your product clearly discloses AI involvement.