The EU AI Act, which entered into force on 1 August 2024, introduces comprehensive regulation of artificial intelligence across Europe. The framework promotes responsible AI development while safeguarding public health, safety, and fundamental rights. The Act seeks to protect rights, democracy, the rule of law, and environmental sustainability from high-risk AI, while also boosting innovation and positioning Europe as a leader in the field.
This guide gives organisations key insights into the Act, along with practical steps, best practices, and essential dos and don’ts for ensuring data compliance.
Understanding Risk Levels Under the EU AI Act
The EU AI Act categorises AI systems into four risk levels, each with specific requirements:
1. Minimal Risk: Systems such as spam filters carry no legal obligations under the Act, though providers may follow voluntary codes of conduct to enhance data protection and ethical use.
2. Specific Transparency Risk: Applications such as chatbots must clearly disclose that users are interacting with AI. AI-generated content should also be clearly labelled to ensure transparency and data compliance.
3. High-Risk: AI systems used in sensitive areas like healthcare, education, and recruitment are classified as high-risk. These require strict compliance measures, including risk assessments, transparency, and human oversight. High-risk categories also cover critical infrastructure, essential services, migration, border management, and justice processes, all demanding rigorous data protection protocols.
4. Unacceptable Risk: Certain AI applications, such as biometric categorisation based on sensitive characteristics, emotion recognition in workplaces and schools, and social scoring, are banned due to their potential to violate fundamental rights and personal freedoms under the EU AI Act.
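For teams building internal compliance tooling, the four tiers above can be modelled as a simple lookup. This is a minimal illustrative sketch only: the `RiskTier` enum, the `classify` helper, and the example mapping are assumptions for illustration, since real classification requires a legal assessment of the specific system.

```python
from enum import Enum


class RiskTier(Enum):
    MINIMAL = "minimal risk"
    TRANSPARENCY = "specific transparency risk"
    HIGH = "high-risk"
    UNACCEPTABLE = "unacceptable risk"


# Illustrative mapping of example use cases from the list above to tiers.
# This is not an official classification; each real system needs its own
# legal assessment under the Act.
EXAMPLE_USE_CASES = {
    "spam filter": RiskTier.MINIMAL,
    "chatbot": RiskTier.TRANSPARENCY,
    "recruitment screening": RiskTier.HIGH,
    "social scoring": RiskTier.UNACCEPTABLE,
}


def classify(use_case: str) -> RiskTier:
    """Return the illustrative risk tier for a known example use case."""
    try:
        return EXAMPLE_USE_CASES[use_case]
    except KeyError:
        raise ValueError(f"Unknown use case {use_case!r}: seek legal assessment")


print(classify("chatbot").value)  # specific transparency risk
```

A lookup like this is only useful as a starting point for an internal inventory; anything not explicitly mapped should be escalated for expert review rather than defaulted to a tier.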
Who Must Comply with the EU AI Act and Why?
The EU AI Act applies to any organisation that develops, deploys, or uses AI systems within the European Union, regardless of their location. This includes:
- AI Providers: Companies developing AI systems.
- AI Users: Entities deploying AI systems in their operations.
- AI Importers and Distributors: Those involved in trading and distributing AI systems within the EU.
The Act ensures that AI technologies respect fundamental rights, meet high standards of safety and transparency, and adhere to strict data compliance regulations, thereby protecting EU citizens and promoting ethical AI practices.
Enforcement Timeline and Fines Under the EU AI Act
Enforcement Timeline: The EU AI Act entered into force on 1 August 2024, with obligations phasing in over the following three years:
- 6 months: Bans on prohibited practices take effect.
- 9 months: Codes of practice must be ready.
- 12 months: Rules for general-purpose AI, including governance structures, apply.
- 24 months: Full applicability of the Act, with some exceptions.
- 36 months: Obligations for high-risk AI systems embedded in products covered by existing EU safety legislation come into full effect.
Potential Fines for Non-Compliance: Non-compliance with the EU AI Act can result in significant fines: for the most serious breaches, up to €35 million or 7% of global annual turnover, whichever is higher. This stringent fine structure highlights the importance of following the Act’s regulations.
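Because the cap is the higher of a flat amount or a percentage of turnover, the effective ceiling scales with company size. A minimal sketch of that arithmetic for the top fine tier (the turnover figures are made-up examples):

```python
def max_fine(global_annual_turnover_eur: float) -> float:
    """Cap for the most serious infringements (prohibited practices):
    the higher of EUR 35 million or 7% of global annual turnover."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)


# Large firm: EUR 2 billion turnover -> the 7% figure dominates.
print(max_fine(2_000_000_000))  # 140000000.0
# Smaller firm: EUR 100 million turnover -> the flat cap applies.
print(max_fine(100_000_000))    # 35000000.0
```

Note that the Act sets lower tiers for less serious infringements; this sketch covers only the headline cap.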
Banned Applications and Law Enforcement Exemptions
The EU AI Act bans AI applications that pose a threat to citizens’ rights, including:
- Biometric categorisation based on sensitive characteristics.
- Untargeted scraping of facial images for recognition databases.
- Emotional recognition in workplaces and schools.
- Social scoring and predictive policing based solely on profiling.
- AI systems designed to manipulate behaviour or exploit vulnerabilities.
Law Enforcement Exemptions
While the use of remote biometric identification (RBI) systems by law enforcement in publicly accessible spaces is generally banned, there are exceptions in narrowly defined situations. “Real-time” RBI is permitted only with strict safeguards and prior authorisation, for example in searches for missing persons or to prevent terrorist attacks. “Post-remote” RBI requires judicial authorisation and must be strictly linked to a criminal offence.
Best Practices for Data Compliance Under the EU AI Act
Do’s
- Conduct Regular Audits: Frequently review your AI systems to ensure they are classified correctly and comply with the Act’s current requirements.
- Implement Training Programmes: Educate employees on the EU AI Act, focusing on their responsibilities and the impact on their roles.
- Ensure Transparency: Make it clear when users interact with AI, and ensure all AI-generated content is properly labelled. General-purpose AI systems must also meet transparency requirements, including publishing summaries of training content and labelling deepfakes.
- Maintain Documentation: Keep comprehensive records of compliance efforts, including risk assessments, audits, and training activities to ensure data protection.
- Consult with Experts: Engage with legal and compliance professionals to fully understand and meet the requirements of the Act.
Don’ts
- Neglect Risk Assessments: Skipping thorough risk assessments can lead to non-compliance and significant penalties.
- Disregard User Rights: Always respect users’ rights concerning AI systems, such as the right to be informed and to challenge automated decisions.
- Rush Implementation: Take the time to fully understand the implications of the Act before implementing changes to your AI systems.
- Isolate Your Efforts: Collaborate with industry experts and regulatory bodies to stay informed about best practices and regulatory changes.
Areas Where Organisations Might Need Support for Data Compliance
- Risk Assessment and Classification: Evaluating and classifying AI systems according to their risk levels.
- Compliance Documentation: Keeping detailed records of compliance efforts and changes.
- Training and Awareness: Educating employees about the Act and their specific responsibilities.
- Legal and Strategic Guidance: Seeking expert advice on navigating regulatory requirements and strategic compliance.
- Innovative Testing: Using regulatory sandboxes and real-world testing to develop and refine AI before market placement.
Conclusion
The EU AI Act represents a major shift in the regulation of artificial intelligence in Europe. By understanding its key requirements, following best practices, and implementing effective compliance measures, organisations can successfully navigate this new regulatory landscape and ensure robust data compliance.
For further assistance with compliance and strategic guidance, please reach out to our Data Protection Officer (DPO), Lynsey Hanson, at lynsey.hanson@tenintel.com, or email dpo@tenintel.com. Get tailored advice on how to adhere to the EU AI Act and maintain high standards of data protection.
Written by
Lynsey Hanson