AI prohibited practices: Digihumanism’s contribution to the European Commission’s consultation

Digihumanism answered the European Commission’s consultation on prohibited AI practices under the EU AI Act.

⚠️ The prohibitions of certain AI practices under the AI Act will enter into force in February 2025. Enforcement by competent authorities will start in August 2025.

Red lines were among the most debated provisions during the negotiation of the EU AI Act.

Most prohibitions are accompanied by exceptions. This makes it tricky to determine what is or is not prohibited, and makes the European Commission’s guidelines on the matter necessary.

❓ Is there a logic behind the red lines and their exceptions?
Not necessarily. Without caricaturing too much, the outcome of the AI Act negotiations is largely the result of bargaining between the European Parliament, which wanted to include red lines, and the Council, which tried to escape them by inserting exceptions.

📍 The objective now is to make these red lines operational.

The red lines concern:

Article 5(1)(a) – Harmful subliminal, manipulative and deceptive techniques

Article 5(1)(b) – Harmful exploitation of vulnerabilities

Article 5(1)(c) – Unacceptable social scoring

Article 5(1)(d) – Individual crime risk assessment and prediction (with some exceptions)

Article 5(1)(e) – Untargeted scraping of internet or CCTV material to develop or expand facial recognition databases

Article 5(1)(f) – Emotion recognition in the areas of workplace and education (with some exceptions)

Article 5(1)(g) – Biometric categorisation to infer certain sensitive categories

Article 5(1)(h) – Real-time remote biometric identification (RBI) in publicly accessible spaces for law enforcement purposes

👉 In its submission to the European Commission, Digihumanism emphasised the following key elements:

🔶 Scope:
Because the red lines aim to protect against the most egregious violations of fundamental rights, and because their delimitation is the product of bargaining more than of evidence-based choices, their scope should be interpreted broadly.

🔶 Burden of proof:
Due to the gravity of the risks these AI practices pose to fundamental rights, the presumption should be that these practices are prohibited. It is up to developers or deployers to prove that an exception actually applies.

🔶 Application of relevant EU law:
The fact that a given practice is permissible under the AI Act does not mean it is not prohibited under other relevant EU law, whether the EU Charter of Fundamental Rights or human rights, data protection, consumer, or sectoral legislation.

The AI Act shall not contradict, and shall be interpreted in the light of, other relevant EU law.

👉 For our recommendations on specific prohibitions (e.g. causality, the link between “harm” and the violation of a fundamental right, the definition of vulnerability, and many more), check out our submission available here.

 

 
