
E.U. Takes Major Step Toward Regulating A.I.


The European Union took an important step on Wednesday toward passing what would be one of the first major laws to regulate artificial intelligence, a potential model for policymakers around the world as they grapple with how to put guardrails on the rapidly developing technology.

The European Parliament, a main legislative branch of the E.U., passed a draft law known as the A.I. Act, which would put new restrictions on what are seen as the technology’s riskiest uses. It would severely curtail uses of facial recognition software, while requiring makers of A.I. systems like the ChatGPT chatbot to disclose more about the data used to create their programs.

The vote is one step in a longer process. A final version of the law is not expected to be passed until later this year.

The European Union is further along than the United States and other large Western governments in regulating A.I. The 27-nation bloc has debated the topic for more than two years, and the issue took on new urgency after last year’s release of ChatGPT, which intensified concerns about the technology’s potential effects on employment and society.

Policymakers everywhere from Washington to Beijing are now racing to control an evolving technology that is alarming even some of its earliest creators. In the United States, the White House has released policy ideas that include rules for testing A.I. systems before they are publicly available and protecting privacy rights. In China, draft rules unveiled in April would require makers of chatbots to adhere to the country’s strict censorship rules. Beijing is also taking more control over the ways makers of A.I. systems use data.


How effective any regulation of A.I. can be is unclear. In a sign that the technology’s new capabilities are emerging faster than lawmakers can address them, earlier versions of the E.U. law gave little attention to so-called generative A.I. systems like ChatGPT, which can produce text, images and video in response to prompts.

In the latest version of Europe’s bill passed on Wednesday, generative A.I. would face new transparency requirements. That includes publishing summaries of copyrighted material used for training the system, a proposal supported by the publishing industry but opposed by tech developers as technically infeasible. Makers of generative A.I. systems would also have to put safeguards in place to prevent them from generating illegal content.

Francine Bennett, acting director of the Ada Lovelace Institute, an organization in London that has pushed for new A.I. laws, said the E.U. proposal was an “important landmark.”

“Fast-moving and rapidly repurposable technology is of course hard to regulate, when not even the companies building the technology are completely clear on how things will play out,” Ms. Bennett said. “But it would definitely be worse for us all to continue operating with no adequate regulation at all.”

The E.U.’s bill takes a “risk-based” approach to regulating A.I., focusing on applications with the greatest potential for human harm. This would include cases where A.I. systems are used to operate critical infrastructure like water or energy, in the legal system, and in determining access to public services and government benefits. Makers of the technology would have to conduct risk assessments before putting it into everyday use, akin to the drug approval process.


A tech industry group, the Computer & Communications Industry Association, said the European Union should avoid overly broad regulations that inhibit innovation.

“The E.U. is set to become a leader in regulating artificial intelligence, but whether it will lead on A.I. innovation still remains to be seen,” said Boniface de Champris, the group’s Europe policy manager. “Europe’s new A.I. rules need to effectively address clearly-defined risks, while leaving enough flexibility for developers to deliver useful A.I. applications to the benefit of all Europeans.”

One major area of debate is the use of facial recognition. The European Parliament voted to ban uses of live facial recognition, but questions remain about whether exemptions should be allowed for national security and other law enforcement purposes.

Another provision would ban companies from scraping biometric data from social media to build out databases, a practice that drew scrutiny after it was used by the facial-recognition company Clearview AI.

Tech leaders have been trying to influence the debate. Sam Altman, the chief executive of OpenAI, the maker of ChatGPT, has in recent months visited with at least 100 American lawmakers and other global policymakers in South America, Europe, Africa and Asia, including Ursula von der Leyen, president of the European Commission. Mr. Altman has called for regulation of A.I., but has also said the E.U.’s proposal may be prohibitively difficult to comply with.

After the vote on Wednesday, a final version of the law will be negotiated between representatives of the three branches of the European Union — the European Parliament, European Commission and the Council of the European Union. Officials said they hope to reach a final agreement by the end of the year.
