The EU Artificial Intelligence (AI) Act entered into force on August 1, 2024, with its obligations phasing in over the following years.
This act implements a risk-based approach to AI regulation, categorizing AI systems based on the level of risk they pose. High-risk systems, such as those used in healthcare, transport, and law enforcement, face stringent requirements, including risk management, transparency, and human oversight.
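To make the tiered structure concrete, here is a minimal Python sketch of how a compliance team might inventory its AI systems by risk tier. The `RiskTier` enum, the `SYSTEM_INVENTORY` mapping, and the system names are hypothetical illustrations, not terms or classifications taken from the Act itself.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk categories defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright (e.g., social scoring)
    HIGH = "high"                  # strict obligations (e.g., healthcare, law enforcement)
    LIMITED = "limited"            # transparency obligations (e.g., chatbots)
    MINIMAL = "minimal"            # largely unregulated (e.g., spam filters)

# Hypothetical internal inventory mapping use cases to tiers;
# the names and tier assignments are illustrative, not legal advice.
SYSTEM_INVENTORY = {
    "resume-screening": RiskTier.HIGH,
    "medical-triage-assistant": RiskTier.HIGH,
    "customer-support-chatbot": RiskTier.LIMITED,
    "email-spam-filter": RiskTier.MINIMAL,
}

def obligations_for(system_name: str) -> str:
    """Summarize the compliance obligations for a registered system."""
    tier = SYSTEM_INVENTORY[system_name]
    if tier is RiskTier.UNACCEPTABLE:
        return "Prohibited: do not deploy in the EU."
    if tier is RiskTier.HIGH:
        return "Risk management, transparency, and human oversight required."
    if tier is RiskTier.LIMITED:
        return "Transparency obligations (e.g., disclose AI interaction)."
    return "No specific obligations under the Act."

print(obligations_for("resume-screening"))
```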
Key provisions of the AI Act include:
- Transparency and Safety Requirements: AI systems must be designed to be safe, transparent, and easily understandable to users. This includes labeling requirements for AI-generated content, such as deepfakes; a labeling sketch follows this list (Engadget).
- Risk Management and Compliance: Companies must establish comprehensive governance frameworks to assess and manage the risks their AI systems pose, including compliance programs that cover data privacy, ethical use, and the jurisdictions in which the systems operate (Faegre Drinker Biddle & Reath LLP; Passle).
- Copyright and Data Mining: Companies must adhere to copyright laws when training AI models, obtaining proper authorization from rights holders for text and data mining unless it is for research purposes (Engadget).
- Prohibitions and Restrictions: AI systems that manipulate behavior, exploit vulnerabilities, or perform social scoring are prohibited. The act also sets out specific rules for high-risk AI applications and imposes fines for non-compliance (Passle).
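As a concrete illustration of the transparency and labeling requirement above, the sketch below attaches a machine-readable AI-disclosure record to generated output. The JSON schema and field names are assumptions for illustration; the Act requires disclosure of AI-generated content but does not prescribe this particular format.

```python
import json
from datetime import datetime, timezone

def label_ai_content(content: str, model_name: str) -> str:
    """Wrap generated content in a machine-readable AI-disclosure record.

    A hypothetical labeling scheme: the disclosure text, field names,
    and JSON structure are illustrative, not mandated by the Act.
    """
    record = {
        "content": content,
        "disclosure": "This content was generated by an AI system.",
        "generator": model_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record, indent=2)

print(label_ai_content("A scenic photo caption...", "example-image-model-v1"))
```

Embedding the disclosure alongside the content, rather than in a separate log, keeps the label attached wherever the output travels.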
For US tech firms, compliance with the EU AI Act is critical because of the EU's significant market size and the Act's extraterritorial reach: it applies to providers that place AI systems on the EU market, regardless of where the company is established.