The AI Act aims to address AI’s most pressing issues, including bias, privacy and misinformation.
Lawmakers in the EU today (13 March) approved the bloc’s landmark AI Act with an overwhelming majority, marking the next step in enforcing the world’s most extensive set of rules on AI.
The new legislation will affect how the rapidly growing technology is governed in the Western world, given the lack of any formal guardrails implemented by the US.
The legislation will determine different risk levels for public AI applications and systems. Those with the highest risk levels will be forced to meet strict requirements to operate.
One mandatory guideline for AI systems will be a human rights test that can measure whether a system is biased or discriminatory.
Restrictions will be phased in over time to give AI providers time to adjust. At least 15% of all AI systems will fall under the high-risk category.
The first set of restrictions will be imposed on GenAI systems such as OpenAI's ChatGPT and Google's Gemini. Other restrictions may not be enforced until 2026.
“Europe is now a global standard-setter in trustworthy AI,” Thierry Breton, EU Internal Market Commissioner, said in a statement.
During the session, EU lawmakers acknowledged that there is still a lot of work to do before the Act is up and running.
Dragos Tudorache, a European Parliament member, said: “The rules we have passed in this mandate to govern the digital domain — not just the AI Act — are truly historical, pioneering.
“But making them all work in harmony with the desired effect and turning Europe into the digital powerhouse of the future will be the test of our lifetime.”
How will the EU AI Act affect businesses?
Laura Petrone, principal analyst at research and analysis company GlobalData, told Verdict that the AI Act will be problematic for some EU businesses.
Petrone said many businesses will likely find the legislation “too tough” and say it “risks hampering innovation.”
The inclusion of restrictive rules on foundation models, such as those underpinning ChatGPT, means providers will have to meet thorough transparency requirements.
“Developers of general-purpose models labelled as a systemic risk will also have to put mitigation strategies in place and pass on details of any incident to the European Commission’s new AI Office, which is responsible for enforcing the rules,” Petrone added.
Petrone said it will become “critical but challenging” for the EU to distinguish between types of foundation models and those that may require additional regulatory scrutiny.
“There are still many uncertainties about how the legislation will be implemented and whether the AI Office possesses the necessary resources to accomplish its tasks,” Petrone said.
Curtis Wilson, staff data scientist at the Synopsys Software Integrity Group, told Verdict that regulatory frameworks like the EU AI Act will help build trust in AI, something he feels is the greatest problem facing AI developers.
“The strict rules and punishing fines will deter careless developers, and help consumers be more confident in trusting and using AI systems,” he said.
Source: Verdict