New AI Act Rules to Take Effect in February 2025: Prohibiting Unacceptable-Risk AI Systems

by Marco van der Hoeven

Starting this month, new provisions of the European Union’s Artificial Intelligence Act (AI Act) will come into force, specifically targeting AI systems classified as posing an “unacceptable risk.” These measures aim to safeguard fundamental rights and ensure ethical standards in the development and deployment of AI technologies across member states.

Prohibited AI Systems

Under the AI Act, systems identified as presenting an unacceptable risk are now prohibited from being provided or deployed within the EU. The banned systems and applications include:

  1. Social Scoring Systems: AI used to evaluate individuals based on social behavior or personal characteristics, potentially leading to discriminatory outcomes.
  2. Predictive Policing: Systems designed to assess or predict the likelihood of individuals committing criminal acts, raising concerns about bias and civil liberties.
  3. Facial Recognition Databases: The creation or expansion of facial recognition databases through data scraping methods is strictly forbidden.
  4. Manipulative Technologies: AI systems intended to manipulate or mislead individuals, infringing on autonomy and freedom of choice.
  5. Emotion Recognition in Sensitive Contexts: The use of emotion recognition technologies in workplaces and educational institutions is banned due to concerns over privacy and psychological impacts.
  6. Remote Biometric Identification for Law Enforcement: While generally prohibited, certain exceptions apply under specific, regulated circumstances.

Exceptions and Clarifications

Despite these restrictions, the AI Act allows for certain exceptions, particularly concerning biometric categorization. In specific cases, AI systems can classify individuals into sensitive categories based on biometric data, provided strict ethical and legal guidelines are followed.

Organizations developing or deploying AI technologies within the EU must ensure compliance with these new regulations. Non-compliance could result in considerable penalties, emphasizing the need for robust governance and ethical oversight in AI projects.

The introduction of these rules is part of the EU’s broader strategy to create a safe, transparent, and rights-respecting AI ecosystem.