Thursday was a historic day for the European Union: it passed the EU AI Act, a comprehensive framework for dealing with the risks of artificial intelligence. It is a first for the bloc and a first for the world. Europe says the act will enable innovation while safeguarding fundamental rights, aiming to make technology “more human-centric”.
The idea is to regulate AI based on its capacity to harm society: the higher the risk, the stricter the rules. The act takes a detailed approach, defining AI as ‘a machine-based system designed to operate with varying levels of autonomy’. That definition also covers chatbots like ChatGPT and Gemini.
The act ranks AI systems by risk level, from minimal to high. High-risk systems include those deployed in banking, schools, or critical infrastructure. They have to be accurate, a human must oversee them, and their usage has to be monitored. Citizens directly affected by such systems have the right to question decisions made about them.
Then there are systems that are prohibited outright: AI systems that can cause harm and affect people’s lives. A social scoring system, for example, which classifies people based on their social behaviour or personality. China operates one, but it is a strict no for the European Union.
Not all AI tools fall under this act. Tools designed for military, defence, or national security purposes are exempt, as are those built for science and research. Facial recognition in public spaces is largely restricted, with narrow exceptions for law enforcement.
The act also aims to tackle deepfakes. If content is artificially generated or altered, it must be labelled, and the people, companies, and bodies deploying it must flag it as such.
Then there is generative AI, the type of artificial intelligence that can create content (text, imagery, audio, pretty much anything) like ChatGPT and DALL-E. The EU says such systems must meet specific requirements: they must comply with EU copyright law and publish summaries of the data used to train them. Everything must be transparent.
Failures in compliance attract fines ranging from 7.5 million euros to 35 million euros, depending on the violation: giving incorrect information to regulators, breaching provisions of the act, or developing and deploying banned tools.
The response to the act has been mixed. Tech companies have welcomed it but remain wary about the specifics. The act is expected to become law around May, and implementation will begin only in 2025, giving companies around two years to comply.
While the EU is a trendsetter with this act, the US already mandates that AI developers share data with the government, and China has introduced a patchwork of AI laws. As the world wakes up to the dangers of AI, other countries may now follow suit.
The views expressed in the above piece are personal and solely those of the author. They do not necessarily reflect Firstpost’s views.