AI Act: Establishing Rules Based on Risk Levels for Artificial Intelligence
The new AI Act introduces rules for artificial intelligence systems that vary with the level of risk they pose: the higher the risk, the stricter the obligations placed on providers and users.
AI systems deemed to pose an unacceptable risk, meaning those that threaten people’s safety or fundamental rights, will be banned. This covers systems used for cognitive behavioral manipulation, social scoring, biometric categorization of people, and real-time remote biometric identification. Narrow exceptions may be allowed for law enforcement, notably the use of real-time remote biometric identification in a limited number of serious cases.
AI systems categorized as high risk will be assessed before being placed on the market and throughout their lifecycle. They fall into two categories: systems used in products covered by the EU’s product safety legislation, and systems used in specific critical areas, which must be registered in an EU database. Individuals will have the right to file complaints about high-risk AI systems with designated national authorities.
Transparency requirements will also apply to AI systems, including generative AI such as ChatGPT. These systems will need to disclose that content was generated by AI, be designed to prevent the generation of illegal content, and publish summaries of the copyrighted data used for training. High-impact general-purpose AI models will undergo thorough evaluations, and any serious incidents must be reported to the European Commission. Additionally, content generated or modified with the help of AI, such as deepfakes, must be clearly labeled as AI-generated.
The AI Act also aims to support innovation by giving start-ups and small and medium-sized enterprises access to testing environments that simulate real-world conditions, in which they can develop and train AI models. The goal is to foster responsible AI development while protecting individuals’ rights and safety.