1st EU GPAI Code of Practice: Call for transparency, meaningful public participation and respect for the AI Act obligation to prevent and mitigate risks to fundamental rights

On November 14, the AI Office published the first draft of the EU GPAI Code of Practice, along with a Q&A on GPAI models in the EU AI Act.

The first draft was prepared by independent experts appointed as Chairs and Vice-Chairs of four thematic working groups (WGs).

The experts developed this initial version based on contributions from a multi-stakeholder consultation held last summer and a first workshop organised with GPAI model providers ahead of the working group meetings. The drafting also took international approaches into account.

Four thematic WG sessions will take place next week. Registered participants have until Nov. 28 to send feedback on the first draft of the Code.

📍FROM A SUBSTANTIVE STANDPOINT

🔔 Taking international approaches into account should come at the expense of neither the EU AI Act nor the EU law acquis: the EU GPAI Code is meant to operationalise them.

Let’s recall that the GPAI Code of Practice is not just a voluntary code:

👉 The GPAI Code of Practice will confer on GPAI model providers a presumption of conformity with the relevant legal obligations of the EU AI Act.

👉 By adopting an implementing act, the Commission may decide to approve the Code and give it general validity within the Union.

In a recently published JRC paper on harmonised standards for the EU AI Act, Josep Soler Garrido, Sarah De Nigris, Elias Bassani, Ignacio Sanchez, Tatjana Evas, Antoine-Alexandre André and Thierry Boulangé, members of the AI Office, explained:

Among the key aims of the AI act are to ensure that AI systems respect the safety, health and fundamental rights of individuals, and to address the risks of very powerful AI models.

Tailored to the objectives of the AI Act: Standards must specifically address and prioritise the risks that AI could pose to the health, safety and fundamental rights of individuals. However, existing international standardisation efforts tend to focus on protecting the objectives of organisations using AI. There are fundamental differences between managing risks to organisational objectives and addressing possible risks of AI systems to individuals. The latter should be the focus of standards supporting the implementation of the AI Act.

The obligation to prevent and mitigate risks to fundamental rights is at the heart of the EU AI Act for both AI systems and GPAI models. Digihumanism looks forward to the Chairs and Vice-Chairs implementing this basic obligation through the drafting of the taxonomy of risks posed by GPAI models and the related risk management system to prevent and mitigate them.

📍FROM A PROCEDURAL STANDPOINT

Digihumanism congratulates the AI Office for making the first draft publicly available and for allowing participants to comment on the drafting process outside the WGs themselves, provided the Chatham House Rule is applied.

However, Digihumanism calls on the AI Office to remedy the following shortcomings in order to ensure meaningful public participation, a level playing field among the participants and transparency in the drafting process:

1/ Art. 1(1) EU AI Act makes clear that the purpose of the regulation is to prevent and mitigate risks to affected people, especially as far as their fundamental rights are concerned. However, only GPAI model providers benefit from workshops with the WG Chairs and Vice-Chairs ahead of the WG sessions.

There is also a lack of experts in EU data protection law and fundamental rights law among the Chairs and Vice-Chairs, which makes it more difficult for the Code to take the existing acquis in these fields into account.

-> Digihumanism calls for workshops with rightsholders, civil society organisations and academics to be organised ahead of the WG sessions.

2/ Most of the Working Group sessions were dedicated to a series of 3-minute monologues, with no time for discussion and the chat deactivated. Although speaking slots are useful for understanding stakeholders' positions, this format does little to make the drafting process more transparent and gives no indication of how participants' insights will be taken into consideration.

-> Digihumanism urges the AI Office and the Chairs and Vice-Chairs to organise more sessions in which specific provisions can be debated and discussed, so that the drafting process is truly informed by participants' insights.

3/ Civil society representatives are outnumbered in the WGs, and surveys conducted via Slido do not distinguish among categories of participants. The responses collected therefore lack representativeness.

-> Digihumanism recommends using a tool that can weight votes according to participants' registered category.

4/ The deadline for requesting speaking time during the WG sessions expired before the draft Code was made available to participants. This was a missed opportunity for many.

-> Digihumanism requests that the relevant version of the Code be distributed at least two weeks before the deadline for speaking-time requests.

5/ The deadline for submitting questions was the same as the deadline for upvoting them, even though upvotes determine which questions the Chairs and Vice-Chairs select and address during the WG sessions.

-> Digihumanism recommends that at least one week be left between the deadline for submitting a question and the deadline for upvoting it.

6/ Answers to the questions in the consultation on the first draft are strictly limited in length (500 characters for some questions, 2,000 for others), and less than one week separates the WG sessions from the deadline to answer the consultation. This does not allow participants to provide meaningful feedback on the Code.

-> Digihumanism requests that more time and more space be allocated for responding to the consultation. Participants should have at least three weeks to answer the consultation meaningfully and with evidence supporting their positions.
