GPAI Code of Practice: For a truly participatory drafting process: Collective letter to the European Parliament Joint Working Group on AI Act implementation
Ahead of today’s meeting of the European Parliament (EP) Joint Working Group on the implementation and enforcement of the AI Act, Digihumanism, together with a group of CSOs and independent experts, wrote a LETTER to the Co-Chairs of the EP Working Group, MEP Benifei and MEP McNamara, with concrete suggestions to ensure that the drafting process of the General Purpose AI (GPAI) Code of Practice is truly participatory. The EP Working Group plays a key role in holding the European Commission / AI Office accountable for the proper implementation of the EU AI Act.
BACKGROUND
We contest the current narrative that casts GPAI model providers as the main addressees of the Code of Practice, a framing that serves to legitimise their privileged position in the GPAI Code drafting process.
The GPAI Code of Practice will be the main instrument for the implementation of the EU AI Act with regard to general purpose AI models. The GPAI Code of Practice is not just a voluntary code:
—> It will confer on GPAI model providers a presumption of conformity with the EU AI Act.
—> By adopting an implementing act, the Commission may decide to approve the Code and give it a general validity within the Union.
Thus, the GPAI Code should not be reduced to the interests of GPAI model providers. It is absolutely necessary for the drafting process to ensure a level playing field among all participants (industry, civil society organisations, rights holders, academics and independent experts) so that the drafting of the Code is guided by the public interest.
Article 1(1) of the EU AI Act makes clear that:
“The purpose of this Regulation is to improve the functioning of the internal market and promote the uptake of human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter, including democracy, the rule of law and environmental protection, against the harmful effects of AI systems in the Union and supporting innovation.”
—> The participation of civil society organisations and rights holders is key to representing the interests of those potentially negatively affected by AI systems and models, with the aim of preventing and mitigating harmful effects.
—> Academics and independent experts are best placed to provide authoritative and objective scientific insights in order to ensure the uptake of human-centric and trustworthy AI.
The participation of all of these groups is essential to ensure a high level of protection of health, safety and fundamental rights.
CONCRETE SUGGESTIONS
In our letter, we commend the AI Office for trying to set the grounds for an inclusive drafting process. We also make some suggestions to ensure that this objective is reached. We thus call on the EP Working Group to:
👉 Request that the AI Office publish the full list of Working Group (WG) participants, ensure that Q&A is non-anonymous, and make speaker invitations for WG sessions transparent.
👉 Ask the AI Office to clarify how consultation responses are weighed against the GPAI model provider workshops in the drafting process, and request that the AI Office ensure transparency by providing proper feedback for each round of consultation: how, and to what extent, input has been taken into account, and why certain suggestions have not been taken up.
👉 Request the AI Office to organize workshops with rights holders, civil society organizations and academics ahead of the WG sessions.
👉 Inquire why restrictions were imposed on the functioning and participatory features of the online WG sessions, and urge the AI Office and the Chairs and Vice-Chairs to organize more WG sessions in which all participants can debate and discuss specific provisions.
👉 Inquire whether an interactive tool that can weigh votes based on participants’ registered category could be used during the WG sessions.
👉 Remind the AI Office that participants should be granted sufficient time to request speaking time during the WG sessions based on the actual content of the Code. The relevant version of the Code should be distributed at least two weeks before the deadline for speaking-time requests.
👉 Recommend that at least one week be left between the deadline for submitting questions and the deadline for upvoting them.
👉 Remind the AI Office of the importance of respecting the tenets of the Better Regulation Agenda and of allocating more time and space for responding to consultations. Participants should have at least three weeks to answer consultations in a meaningful way, with evidence to support their positions, and four weeks when the consultation period spans the Christmas break.
👉 Ask the AI Office to publish confirmed dates for WG sessions through April 2025.
👉 Ask the AI Office whether and how it monitors that the Chairs and Vice-Chairs implement this basic obligation throughout the drafting of the taxonomy of risks posed by GPAI models and the related risk management system to prevent and mitigate them. It is important for the AI Office to remind the Chairs and Vice-Chairs of the objectives of the AI Act, and thus of the Code.
Full letter available here.