US, UK, EU Sign 'Legally Binding' AI Treaty
The United States, United Kingdom and European Union signed the first "legally binding" international AI treaty on Thursday, the Council of Europe human rights organization said. Called the AI Convention, the treaty promotes responsible innovation and addresses the risks AI may pose. Reuters reports: The AI Convention mainly focuses on protecting the human rights of people affected by AI systems and is separate from the EU AI Act, which entered into force last month. The EU's AI Act sets out comprehensive regulations on the development, deployment, and use of AI systems within the EU internal market. The Council of Europe, founded in 1949, is an international organization distinct from the EU with a mandate to safeguard human rights; 46 countries are members, including all 27 EU member states. An ad hoc committee began examining the feasibility of an AI framework convention in 2019, and a Committee on Artificial Intelligence was formed in 2022 to draft and negotiate the text. Signatories can choose to adopt or maintain legislative, administrative or other measures to give effect to the treaty's provisions.
Francesca Fanucci, a legal expert at ECNL (European Center for Not-for-Profit Law Stichting) who contributed to the treaty's drafting process alongside other civil society groups, told Reuters the agreement had been "watered down" into a broad set of principles. "The formulation of principles and obligations in this convention is so overbroad and fraught with caveats that it raises serious questions about their legal certainty and effective enforceability," she said. Fanucci highlighted exemptions on AI systems used for national security purposes, and limited scrutiny of private companies versus the public sector, as flaws. "This double standard is disappointing," she added.