
Europe reaches a deal on the world’s first comprehensive AI rules

European Union negotiators clinched a deal today on the world’s first comprehensive artificial intelligence rules, paving the way for legal oversight of the technology behind popular generative AI services like ChatGPT, which has promised to transform everyday life and spurred warnings of existential dangers to humanity.
Negotiators from the European Parliament and the bloc’s 27 member countries overcame big differences on controversial points including generative AI and police use of facial recognition surveillance to sign a tentative political agreement for the Artificial Intelligence Act.
“Deal!” tweeted European Commissioner Thierry Breton, just before midnight. “The EU becomes the very first continent to set clear rules for the use of AI.”
The result came after marathon closed-door talks this week, with the initial session lasting 22 hours before a second round kicked off Friday morning.
Officials were under the gun to secure a political victory for the flagship legislation but were expected to leave the door open to further talks to work out the fine print, likely to bring more backroom lobbying.
The EU took an early lead in the global race to draw up AI guardrails when it unveiled the first draft of its rulebook in 2021.
The recent boom in generative AI, however, sent European officials scrambling to update a proposal poised to serve as a blueprint for the world.
The European Parliament will still need to vote on it early next year, but with the deal done that’s a formality, Brando Benifei, an Italian lawmaker co-leading the body’s negotiating efforts, told The Associated Press late Friday.
“It’s very very good,” he said by text message after being asked if it included everything he wanted.
“Obviously we had to accept some compromises but overall very good.”
The eventual law wouldn’t fully take effect until 2025 at the earliest and threatens stiff financial penalties for violations of up to 35 million euros ($38 million) or 7 per cent of a company’s global turnover.
Generative AI systems like OpenAI’s ChatGPT have exploded into the world’s consciousness, dazzling users with the ability to produce human-like text, photos and songs but raising fears about the risks the rapidly developing technology poses to jobs, privacy and copyright protection and even human life itself.
Now, the US, UK, China and global coalitions like the Group of 7 major democracies have jumped in with their own proposals to regulate AI, though they’re still catching up to Europe.
Strong and comprehensive regulation from the EU “can set a powerful example for many governments considering regulation,” said Anu Bradford, a Columbia Law School professor who’s an expert on EU and digital regulation.
Other countries “may not copy every provision but will likely emulate many aspects of it.”
AI companies that have to obey the EU’s rules will also likely extend some of those obligations to markets outside the continent, she said.
“After all, it is not efficient to re-train separate models for different markets,” she said.
Others are worried that the agreement was rushed through.
“Today’s political deal marks the beginning of important and necessary technical work on crucial details of the AI Act, which are still missing,” said Daniel Friedlaender, head of the European office of the Computer and Communications Industry Association, a tech industry lobby group.
The AI Act was originally designed to mitigate the dangers from specific AI functions based on their level of risk, from low to unacceptable.
But lawmakers pushed to expand it to foundation models, the advanced systems that underpin general purpose AI services like ChatGPT and Google’s Bard chatbot.
Foundation models looked set to be one of the biggest sticking points for Europe. However, negotiators managed to reach a tentative compromise early in the talks, despite opposition led by France, which called instead for self-regulation to help homegrown European generative AI companies competing with big U.S. rivals, including OpenAI’s backer Microsoft.
Also known as large language models, these systems are trained on vast troves of written works and images scraped off the internet.
They give generative AI systems the ability to create something new, unlike traditional AI, which processes data and completes tasks using predetermined rules.
Under the deal, the most advanced foundation models that pose the biggest “systemic risks” will get extra scrutiny, including requirements to disclose more information such as how much computing power was used to train the systems.
Researchers have warned that these powerful foundation models, built by a handful of big tech companies, could be used to supercharge online disinformation and manipulation, cyberattacks or creation of bioweapons.
Rights groups also caution that the lack of transparency about data used to train the models poses risks to daily life because they act as basic structures for software developers building AI-powered services.
The thorniest topic turned out to be AI-powered facial recognition surveillance systems, on which negotiators found a compromise after intensive bargaining.
European lawmakers wanted a full ban on public use of facial scanning and other “remote biometric identification” systems because of privacy concerns, while governments of member countries sought exemptions so law enforcement could use them to tackle serious crimes like child sexual exploitation or terrorist attacks.
Civil society groups remained skeptical.
“Whatever the victories may have been in these final negotiations, the fact remains that huge flaws will remain in this final text,” said Daniel Leufer, a senior policy analyst at the digital rights group Access Now.
Along with the law enforcement exemptions, he cited a lack of protection for AI systems used in migration and border control, and “big gaps in the bans on the most dangerous AI systems.”

