The United States, the European Union, and the United Kingdom have taken a landmark step by signing the first legally binding international treaty on artificial intelligence (AI) at a Council of Europe conference in Vilnius, Lithuania, on Thursday.
This ground-breaking agreement, known as the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, marks a significant milestone in global efforts to regulate AI technologies, underlining the need to balance innovation with the protection of human rights and democratic values.
What does the treaty comprise?
This AI convention is the result of years of negotiations among 57 countries, including major AI players such as the US and the UK, together with the EU and other participants including Japan, Canada, Israel, and Australia.
The treaty focuses on addressing potential risks posed by AI technologies while promoting responsible innovation. The Council of Europe, a Strasbourg-based international organisation established in 1949, spearheaded the initiative to ensure that AI systems align with fundamental human rights, democracy, and the rule of law.
Historic moment! The #CoE opens the first-ever legally binding global treaty on #AI and human rights. Signed by EU 🇦🇩 🇬🇪 🇮🇸 🇳🇴 🇲🇩 🇸🇲 🇬🇧 🇮🇱 🇺🇸, this Framework Convention ensures AI aligns with our values. #HumanRights #Innovation #Democracy #GlobalTreaty
— Council of Europe (@coe) September 5, 2024
As noted by the Council of Europe’s Secretary-General, Marija Pejčinović Burić, “We must ensure that the rise of AI upholds our standards, rather than undermining them.”
She said that the treaty is an “open and inclusive” document with a “potentially global reach,” encouraging more nations to sign and ratify the agreement. She also stated that the text provides a legal framework covering the entire lifecycle of AI systems, promoting innovation while managing risks related to human rights and democracy.
Why is the AI treaty a big deal?
The treaty arrives at a moment when AI is becoming increasingly embedded in various sectors, raising concerns about privacy, discrimination, and the potential misuse of AI systems.
“This Convention is a major step to ensuring that these new technologies can be harnessed without eroding our oldest values, like human rights and the rule of law,” remarked Shabana Mahmood, Britain’s justice minister.
Today in Vilnius, I joined the @coe to sign a historic AI convention ensuring new technology respects our oldest values - democracy, human rights, and the rule of law. We must shape AI, not let it shape us.
— Shabana Mahmood MP (@ShabanaMahmood) September 5, 2024
The treaty is designed to ensure that AI’s rapid advancement does not erode the democratic principles that have guided the international community for decades.
One of the key aspects of the treaty is its requirement that signatories be held accountable for harmful or discriminatory outcomes resulting from AI systems. Furthermore, AI systems must respect privacy and equality rights, while those affected by AI-related human rights violations are entitled to legal recourse.
This creates an overarching framework for countries to monitor and regulate AI without stifling innovation, which is a concern voiced by many companies. Peter Kyle, the UK’s Minister for Science, Innovation, and Technology, underscored the importance of the treaty in establishing a “baseline that goes beyond just individual territories.”
What does the AI treaty secure?
In addition to the US, UK, and EU, a diverse group of countries signed the treaty, including Andorra, Georgia, Iceland, Norway, Moldova, San Marino, and Israel. Several non-member states, such as Argentina, Australia, Japan, and Mexico, were involved in drafting the treaty and are expected to sign soon.
This broad international participation highlights the treaty’s global significance, extending beyond the EU’s recent AI Act, which regulates AI within the bloc.
The treaty’s requirements apply to both public and private-sector AI systems. AI developers must ensure their systems are consistent with obligations to protect human rights. Democratic processes, such as judicial independence and the separation of powers, must not be compromised by AI applications.
Moreover, the treaty mandates that measures be implemented to safeguard public debate and individual opinions in the context of AI-driven systems. As an open treaty, the document can accommodate more countries, encouraging global cooperation on AI governance.
What does the AI treaty not include?
Despite its ambitious goals, the treaty’s scope contains notable exemptions. AI systems used for national security or for research and development purposes are not subject to the same level of scrutiny as systems deployed in other contexts.
This has sparked concerns, particularly among civil society groups like the European Center for Not-for-Profit Law (ECNL). Francesca Fanucci, a legal expert with ECNL, told Reuters that the treaty had been “watered down” into a broad set of principles.
She added, “The formulation of principles and obligations in this convention is so overbroad and fraught with caveats that it raises serious questions about their legal certainty and effective enforceability.”
What’s next?
Although the treaty is hailed as a “legally binding” agreement, critics have pointed out that it lacks provisions for punitive sanctions such as fines. Compliance will be ensured primarily through monitoring mechanisms, which some argue may not provide sufficient enforcement power.
Still, the treaty is viewed as a critical first step in creating a cohesive global approach to AI regulation. “It’s the first [agreement] with real teeth globally, and it’s bringing together a very disparate set of nations,” said Peter Kyle, expressing optimism about its potential impact.
The signing of the treaty also coincides with other global efforts to regulate AI. These include the EU’s AI Act, which came into effect last month, the G7 AI pact agreed in October 2023, and the Bletchley Declaration, signed by 28 countries, including China and the US, in November 2023.
The EU’s AI Act has been particularly controversial, with some companies, such as Meta, opting not to roll out their latest AI products in the region due to stringent regulations.
Vera Jourova, European Commission Vice President for Values and Transparency, expressed her optimism about the treaty’s broader influence: “I am very glad to see so many international partners ready to sign the convention on AI. The new framework sets important steps for the design, development, and use of AI applications, which should bring trust and reassurance that AI innovations are respectful of our values—protecting and promoting human rights, democracy, and the rule of law.”
With inputs from agencies


