EU AI Office: A first analysis
Digihumanism applauds the timely establishment of the EU AI Office but regrets the opacity that has surrounded it.
Human-centric AI governance is key to ensuring proper application and enforcement of the EU AI Act. There is no real transparency regarding transversal coordination on fundamental rights protection within the Directorate‑General for Communications Networks, Content and Technology (DG CNECT) or with other DGs. In the meantime, Digihumanism has carried out a first analysis of the EU AI Office, comparing it with the previous structure of the European Commission.
The EU AI Office results from a slight adaptation of the European Commission’s Directorate on AI and Digital Industry (CNECT A).
The question of the contribution of the Directorate-General for Justice and Consumers (DG JUST), which is in charge of fundamental rights, including data protection, has not been formally clarified.
The same goes for the establishment of a coordination platform with the Digital Services Act or Digital Markets Act governance and enforcement structures.
Digihumanism also deplores the absence of any Lead Fundamental Rights Advisor and recommends the establishment of an AI Office advisor with such a portfolio.
Digihumanism calls for the recruitment of the Lead Scientific Advisor to be open and transparent. Proper implementation and enforcement of the EU AI Act require the Lead Scientific Advisor to be independent, with no ties to either governments or the Tech industry in the past five years. The Lead Scientific Advisor should be well-versed in both technical and European fundamental rights matters. They should not only be a European citizen but should also have exercised their main professional activity in Europe for the past five years. (Vacancy notice, deadline for applications: December 13, 2024, available here.)
Digihumanism welcomed the organisation of the first European AI Office webinar (recording available here), with an interactive Q&A session on risk assessment and compliance requirements for both high-risk AI systems and general-purpose AI (GPAI) models with systemic risks. In both cases, prevention and mitigation of negative impacts on fundamental rights are key. Digihumanism nevertheless regrets the lack of discussion of fundamental rights aspects. It is to be hoped that these aspects will be duly accounted for in the drafting of the GPAI Code of Practice and in the CEN-CENELEC standardisation process regarding high-risk AI systems.
We look forward to learning more about the insights of the EU AI Office, the GPAI Plenary Chairs and Vice-Chairs, and CEN-CENELEC on the safeguards to be operationalised in order to ensure fundamental rights protection.
🚦Fundamental rights protection is guaranteed through the EU Charter of Fundamental Rights and relevant EU legislation that ought to be respected. The OECD expert group on AI Incidents identifies fundamental rights violations as a characteristic defining both actual (AI incident) and potential (AI hazard) harms.
With regard to standards, Digihumanism is looking forward to learning how the EU AI Office will implement the recent case law of the EU Court of Justice, which ensures free access to harmonised standards (CJEU Grand Chamber, Case C-588/21 P, 5 March 2024). Citizens' access to standards is crucial to enable them to verify:
whether a given product or service complies with the requirements of the EU AI Act;
whether the standards themselves comply with the requirements of the EU AI Act.
As a civil society organisation, Digihumanism is committed to making sure of this.