EU’s AI Act sparks backlash as SMEs struggle with 2030 compliance deadlines
The EU's AI Act is facing criticism and concern as Germany's Federal Network Agency is proposed as the national AI supervisory authority, raising questions about the protection of fundamental rights. The long transition periods granted to public authorities have also sparked debate, with digital policy politician Anke Domscheit-Berg questioning why an exception was made for authorities, given the high demands placed on them to observe fundamental rights.
The EU AI Act categorizes AI systems by risk. Prohibited systems include those that use manipulative techniques or that categorize people on the basis of biometric data. High-risk systems, such as biometric identification or employee evaluation tools, face strict requirements. Many small and medium-sized enterprises (SMEs) in Germany, however, are struggling to meet these requirements by 2030, citing a shortage of skilled workers, data protection concerns, and high costs. Despite government support and competency initiatives, only about a fifth of employees who work with AI have received training since the obligation took effect on February 1, 2025, and many companies continue to ignore the training requirement despite the new EU regulation.
Implementation of the EU AI Act thus faces challenges: concerns about the protection of fundamental rights remain, and many companies are failing to meet the employee training obligation. Small and medium-sized enterprises in Germany, in particular, face hurdles in fulfilling the act's requirements by 2030.