Navigating the landscape of the EU’s voluntary AI initiatives
Now that the AI Act, originally proposed in April 2021, is on its way to being adopted by the end of 2023, one might think that artificial intelligence will gradually cease to make headlines in the EU. On the contrary, in the months following the adoption of the AI Act, discussions in Brussels and in the EU's Member States are likely to keep revolving around AI, at least until the AI Act enters into force in 2026.
The European Commission is indeed expected to discuss at least three non-binding texts that will guide and inform European businesses and consumers on artificial intelligence in Europe: the AI Pact, the AI Code of Conduct, and the Code of Practice on Disinformation. Because it would be easy to get lost in this upcoming policy framework, we delve into these voluntary AI initiatives, which will be key for startups, and try to shed light on their differences and similarities.
The AI Pact, led by Internal Market Commissioner Thierry Breton, aims to ease the implementation of the AI Act for AI companies operating in the EU. With this voluntary initiative, the European Commission is committed to addressing the most critical risks associated with rapidly evolving AI technology before the AI Act enters into force. The pact's voluntary nature is a pragmatic response to the legislative void, offering an interim rulebook that provides the 'broad outlines' of the AI Act. This approach acknowledges the urgency of addressing AI's potential harms without waiting for formal regulation to become applicable. Although the AI Pact's specifics remain to be fully revealed, its significance lies in its ability to serve as a prelude to comprehensive regulation. While its voluntary nature may limit its reach and effectiveness compared to binding legislation like the AI Act, the pact recognises the need for collaborative efforts between governments and industry players to ensure responsible AI development and deployment.
The AI Code of Conduct, championed by European Commission Executive Vice President Margrethe Vestager and developed in conjunction with the United States, takes on a different dimension from the AI Pact. Also announced in May 2023, Vestager's vision aligns with a more global perspective, advocating for a non-binding code that encompasses Western democracies beyond Europe (with the hope that other countries eventually join the process). The initiative also differs from Breton's AI Pact in that it plans to specifically address the challenges associated with generative AI systems, like ChatGPT, rather than covering all AI systems within the scope of the AI Act. While the exact scope remains unclear, the objective is to link this initiative to the G7 Hiroshima AI Process, a forum dedicated to generative AI, and to establish non-binding international standards on risk audits, transparency, and other requirements for companies developing generative AI systems. Japan is expected to present an interim report to G7 digital ministers in September 2023, which should include a draft code of conduct. For now, the plan is to submit a final report to G7 leaders in November or December 2023.
The Code of Practice on Disinformation was initially established in 2018 and strengthened in June 2022. This third voluntary initiative gathers a diverse range of online actors, including research groups, civil society organisations, and major platforms subject to new obligations under the Digital Services Act (DSA). The code aims to tackle disinformation and will transition into a code of conduct under the DSA once that legislation comes into force. It focuses on promoting transparency, fact-checking, and responsible content moderation to combat the spread of dangerous disinformation. Notably, the code also addresses the challenges posed by generative AI: the European Union aims to establish a dedicated track within the code to address the unique risks and implications associated with generative AI, including safeguards for services that integrate generative AI and for content dissemination. The emphasis is on striking a balance between protecting freedom of speech and ensuring responsible use of AI-generated content.
The European Commission's efforts to regulate and guide AI reflect a multifaceted approach encompassing binding legislation, voluntary pacts, and codes of conduct. While the AI Act and AI Pact aim to establish rules and principles for AI systems, the AI Code of Conduct and the Code of Practice on Disinformation address specific challenges in AI development and information dissemination. The growing collaboration between EU officials and industry leaders, including startups, highlights the importance of striking a balance between innovation and regulation in the AI domain, and the joint EU-US work on the AI Code of Conduct exemplifies a global approach to shaping the responsible development of AI technologies. How these proposals shape the regulatory framework as we approach 2026 will be of critical importance to startups, and we will follow this process closely to ensure the best possible outcomes for our members.