Digihumanism Contributes to the General Purpose AI (GPAI) Code of Practice
The Centre for AI and Digital Humanism (Digihumanism) looks forward to serving as a hub for the active, coordinated contribution of like-minded partners to the drafting of the General Purpose AI (GPAI) Code of Practice. This significant step forward highlights Digihumanism’s commitment to shaping the future of AI governance in alignment with the European Union’s foundational values. Only together can we influence the process and make a meaningful and impactful contribution.
Karine Caunes, Digihumanism Executive Director, expressed: “Digihumanism will have at heart to contribute to the humanistic implementation of the EU AI Act, rooted in European principles: fundamental rights protection, democracy, and the rule of law, and in collaboration with like-minded partners.”
Key Focus Areas:
Balanced Stakeholder Participation: The EU AI Act mandates the participation of a diverse group of stakeholders, from civil society to the tech industry, to ensure balanced input in AI regulation.
Digihumanism will monitor the process and contribute to ensuring meaningful public participation.
👉 Digihumanism created and coordinates a platform for exchange, already bringing together more than 30 like-minded partners, to devise strategies, enrich each other’s contributions and draft common positions. The platform is composed of civil society organisations, independent experts and academics.
👉 Mission Occupy the Field: Coordinated participation through speaking slots and questions in the four GPAI Working Group sessions. Thanks to early action, Digihumanism’s GPAI hub members secured at least one third of the speaking slots in all Working Group sessions and had several of their questions addressed by the Working Group Chairs and Vice-Chairs. This strategy has put the humanistic agenda front and center in all discussions.
Addressing Systemic Risks to Fundamental Rights: At the center of the GPAI Code of Practice are the identification, assessment, prevention and mitigation of systemic risks.
Based on the first draft of the Code of Practice, its focal point is the taxonomy of risks. The Code’s capacity to address risks to human beings rests primarily on this taxonomy.
👉 Digihumanism will prioritise the elaboration of a comprehensive taxonomy focused on fundamental rights protection. Risks to fundamental rights should be at the heart of the taxonomy. This is what distinguishes the European approach from international methodologies. It would set a positive example for other key initiatives on risk management, such as the EU standardisation process for high-risk AI systems (Art. 9 EU AI Act), the fundamental rights impact assessment (Art. 27 EU AI Act) and the HUDERIA methodology in support of the Council of Europe’s Framework Convention on AI, human rights, democracy and the rule of law. In practice, risk management should become synonymous with human rights impact assessment.
Potential EU Law: By means of an implementing act, the European Commission may decide to approve the Code, which would make it legally binding across the EU. The first draft of the GPAI Code of Practice overlooks this by focusing on signatories. This is reinforced by a process that attributes a privileged position to GPAI model providers: workshops with the Chairs and Vice-Chairs of the GPAI Working Groups are organised for them alone. The purpose of the AI Act is to “ensur[e] a high level of protection of health, safety, fundamental rights enshrined in the Charter, including democracy, the rule of law and environmental protection, against the harmful effects of AI systems in the Union” (Art. 1 EU AI Act). EU law applies to all and is supposed to represent and protect the general interest. From a procedural standpoint, giving GPAI model providers a privileged position is not conducive to ensuring the general interest. From a substantive standpoint, focusing on signatories subtly tips the process towards an organisation-focused rather than a rights-focused perspective. This could have a detrimental impact on the content of the Code and its capacity to correctly implement the EU AI Act, in line with its purpose.
👉 Digihumanism will push back against the pro-GPAI-provider narrative which currently permeates the Code and which legitimises the privileged position attributed to GPAI providers.
A Call for Equal Representation
Digihumanism emphasises the importance of ensuring that civil society, academics, independent experts, industry participants, and GPAI providers all have an equal voice in this process. This principle of equal representation is vital to building a democratic and transparent framework for AI governance.
Stay tuned for updates as Digihumanism continues to advocate for a future where AI development respects fundamental rights. This is a sine qua non for fostering a digital ecosystem that works for all.