Digihumanism provides expert feedback to the Dutch Department for the Coordination of Algorithmic Oversight on AI Act prohibited practices

The Department for the Coordination of Algorithmic Oversight (DCA) of the Dutch Data Protection Authority (AP) called for input on the first prohibitions in the AI Act: manipulative, deceptive and exploitative AI systems.

Digihumanism would like to congratulate the DCA for its proactive stance in drafting guidance ahead of the February 2025 entry into force of the AI Act provisions on prohibited AI practices. Providers and deployers of general-purpose AI models and AI systems have the obligation to prevent or stop the development and deployment of harmful categories of AI. The timely implementation and enforcement of red lines is crucial to prevent unacceptable risks from materialising.

Digihumanism also welcomes the DCA’s commitment to ensure meaningful public participation in the drafting of these guidelines. AI prohibited practices concern the categories of AI most harmful to individuals, vulnerable groups and society. It is essential that those affected by these practices and civil society organisations are consulted, so that the guidance effectively addresses unacceptable AI risks and prevents them from translating into real harm to human lives.

The DCA asked for concrete examples of harmful manipulative, deceptive and exploitative AI systems based on the criteria established in Article 5(1)(a) and (b) of the EU AI Act. In response, Digihumanism gathered and shared more than 20 pages of substantial evidence. We remain at the DCA’s disposal to discuss further concrete wording for the upcoming guidelines to properly capture these unacceptable AI practices.

In the meantime, Digihumanism would like to share the following recommendations:

👉 Harms triggered by manipulative, deceptive and exploitative AI practices do not belong to science fiction or a distant future. They are happening now. There is an urgent need to put a stop to these practices and prevent them from developing any further. The DCA’s guidelines should set strong and future-proof red lines. They should be as wide as possible to avoid fundamental rights violations.

👉 AI practices that mislead or exploit human beings, undermining their autonomy and informed consent or taking advantage of their vulnerabilities, call for robust regulatory measures. Such measures must uphold the ban on these practices, ensure informational and algorithmic transparency, and sustain user trust in the reliability of AI.

👉 Addressing these unacceptable risks requires strong enforcement mechanisms that hold the developers and deployers of these harmful AI systems, including GPAI models, accountable.

Independent public AI oversight institutions such as the DCA should be empowered with the necessary financial and human resources to fulfil their mission effectively and to ensure that AI remains a tool for empowerment rather than exploitation.

To read Digihumanism’s full submission, please click here.
