E.U. Parliament approves landmark AI Act, challenging tech giants … – The Washington Post


European Union lawmakers on Wednesday took a key step toward passing landmark restrictions on the use of artificial intelligence, putting Brussels on a collision course with American tech giants funneling billions of dollars into the burgeoning technology.

The European Parliament overwhelmingly approved the E.U. AI Act, a sweeping package that aims to protect consumers from potentially dangerous applications of artificial intelligence. Government officials made the move amid concerns that recent advances in the technology could be put to nefarious ends, ushering in surveillance, algorithmically driven discrimination and prolific misinformation that could upend democracy. E.U. officials are moving much faster than their counterparts in the United States, where discussions about AI have dragged on in Congress despite apocalyptic warnings from even some industry officials.

The legislation takes a “risk-based approach,” introducing restrictions based on how dangerous lawmakers predict an AI application could be. It would ban tools that European lawmakers deem “unacceptable,” such as systems allowing law enforcement to predict criminal behavior using analytics. It would introduce new limits on technologies deemed merely “high risk,” such as tools that could sway voters to influence elections, or the recommendation algorithms that suggest what posts, photos and videos people see on social networks.

The bill takes aim at the recent boom in generative AI, creating new obligations for applications such as ChatGPT that make text or images, often with humanlike flair. Companies would have to label AI-generated content to prevent AI from being abused to spread falsehoods. The legislation requires firms to publish summaries of what copyrighted data is used to train their tools, addressing concerns from publishers that corporations are profiting off materials scraped from their websites.

The threat posed by the legislation to such companies is so grave that OpenAI, the maker of ChatGPT, said it may be forced to pull out of Europe, depending on what is included in the final text. The European Parliament’s approval is a critical step in the legislative process, but the bill still awaits negotiations involving the European Council, whose membership largely consists of heads of state or government of E.U. countries. Officials say they hope to reach a final agreement by the end of the year.

The vote cements the E.U.’s position as the de facto global leader on tech regulation, as other governments — including the U.S. Congress — are just beginning to grapple with the threat presented by AI. The legislation would add to an arsenal of regulatory tools that Europe adopted over the past five years targeting Silicon Valley companies, while similar efforts in the United States have languished. If adopted, the proposed rules are likely to influence policymakers around the world and usher in standards that could trickle down to all consumers, as companies shift their practices internationally to avoid a patchwork of policies.

“We have made history today,” co-rapporteur Brando Benifei, an Italian member of the European Parliament working on the AI Act, said in a news conference. Benifei said the lawmakers “set the way” for a dialogue with the rest of the world on building “responsible AI.”

The European Union for years has taken a tough line against American tech giants, bringing fines against companies that abuse their dominance and serving as a global laboratory for new forms of data privacy regulation. The 27-member bloc’s aggressive posture toward Silicon Valley was widely criticized by U.S. politicians during the Obama administration, who portrayed Brussels’s moves as an assault on American innovation. But a transatlantic alliance on tech regulation has developed in recent years, accelerating as the Biden administration seeks a harder line against tech giants’ alleged abuses.

The increasing alignment among regulators was evident in a separate announcement Wednesday, as Europe’s top antitrust regulator announced its preliminary finding that Google’s advertising technology business violated its competition laws, proposing a breakup of the company’s lucrative services. The European Commission alleges that Google’s grip on the high-tech tools that publishers, advertisers and brokers use to buy and sell digital advertising gives the company an unfair advantage over rivals. Brussels’s findings mirror the Biden Justice Department’s landmark antitrust lawsuit against the tech giant, which also seeks a divestment of this critical revenue driver.

Google has pushed back on the complaints, criticizing the European Commission’s filing as “not new.”

“It fails to recognize how advanced advertising technology helps merchants reach customers and grow their businesses — while lowering costs and expanding choices for consumers,” said Dan Taylor, Google’s vice president of global ads.

European policymakers and their counterparts are increasingly in communication about how to address the power of Silicon Valley giants. Dragos Tudorache, a Romanian member of the European Parliament who served as co-rapporteur on the AI legislation, said he has been talking to U.S. lawmakers about artificial intelligence for years. During a recent trip to Washington, he attended a private briefing with OpenAI chief executive Sam Altman and members of Congress, including House Speaker Kevin McCarthy (R-Calif.), and he said he sensed a greater urgency on Capitol Hill to regulate AI.

“Something has changed now,” he said. “In the last six months, the impact of the rapid evolution of ChatGPT and large language models has really elevated the topic and [brought] these concerns for society up to the fore.”

Unlike their counterparts in the United States, E.U. lawmakers have spent years developing AI legislation. The European Commission first released a proposal more than two years ago and has amended it in recent months to address recent advances in generative AI.

The E.U.’s progress contrasts starkly with the picture in the U.S. Congress, where lawmakers are newly grappling with the technology’s potential risks. Senate Majority Leader Charles E. Schumer (D-N.Y.), who is leading bipartisan efforts to craft an AI framework, said lawmakers probably are months away from considering any legislation, telling The Washington Post they would “start looking at specific stuff in the fall.”

Schumer’s push is also motivated by national security concerns, as lawmakers warn that if the United States doesn’t act, its adversaries will. Schumer announced his plans for a legislative framework in April, after China unveiled its plans to regulate generative AI.

Meanwhile, the E.U. bill builds on scaffolding already in place, adding to European laws on data privacy, competition in the tech sector and the harms of social media. Already, those existing laws affect companies’ operations in Europe: Google planned to launch its chatbot Bard in the E.U. this week but had to postpone after receiving requests for privacy assessments from the Irish Data Protection Commission, which enforces Europe’s General Data Protection Regulation. Italy temporarily banned ChatGPT amid concerns over alleged violations of Europe’s data privacy rules.

In the United States, Congress has not passed a federal online privacy bill or other comprehensive legislation regulating social media. On Tuesday, Schumer hosted the first of three private AI briefings for lawmakers. MIT professor Antonio Torralba, who specializes in computer vision and machine learning, briefed lawmakers on the current state of AI, covering tools’ uses and capabilities. The next session will look at the future of AI, and the third session, which will be classified, will cover how the military and the intelligence community use AI today.

Thirty-six Democrats and 26 Republicans attended the briefing, according to Gracie Kanigher, a spokeswoman for Schumer. Senators said the strong attendance signaled the deep interest in the topic on Capitol Hill and described the briefing as largely educational. Schumer told The Post that Congress has “a lot to learn.”

“It’s hard to get your arms around something that is so complicated and changing so quickly but so important,” he said.

American companies, including Microsoft, OpenAI and Google, are aggressively lobbying governments around the world, saying that they are in favor of new AI regulations. Since the beginning of the year, they have mounted a publicity blitz calling for greater transparency around AI and responsible use of the technology. Top technologists and academics, including Elon Musk, in March signed an open letter warning of “profound risks to society and humanity” and calling for a six-month pause in the development of AI language models.

But despite companies’ overtures supporting regulatory action, they have opposed aspects of the E.U.’s approach. Google, Microsoft and OpenAI declined to comment on Wednesday’s vote.

Google, for instance, has repeatedly called for AI regulation in recent months, this week filing a proposal with the Commerce Department outlining ways to advance “trustworthy” AI. In that filing, the company took aim at the E.U.’s AI proposal, warning that provisions intended to create greater transparency come with significant trade-offs.

“In many contexts, AI source code is highly sensitive information and, where compelled, disclosure could both compromise trade secrets and create security vulnerabilities that could be exploited by criminals and foreign adversaries,” the company said.

Several Democratic lawmakers said they are wary of once again falling behind Europe in setting rules of the road for technology.

“The United States should be the standard-setter. … We need to lead that debate globally, and I think we’re behind where the E.U. is,” said Sen. Michael F. Bennet (D-Colo.).

But Sen. Mike Rounds (R-S.D.), who is working with Schumer on AI, said he is less concerned about falling behind in setting new guardrails than he is about ensuring that the United States can stay ahead globally in developing tools such as generative AI.

“We’re not going to lose that lead, but what we do with legislation, our goal, is to make sure that we incentivize the creation of AI, allow it to grow more quickly than in other parts of the world … but also to protect the rights of individuals,” Rounds said after the briefing.

As Congress debates legislation, federal agencies including the Federal Trade Commission are weighing how they can move quickly to apply existing laws and regulations, especially those governing civil rights, to artificial intelligence systems — potentially outpacing Europe. If the E.U. adopts the AI Act, it probably will take at least two additional years to come into force.

Alex Engler, a fellow at the Brookings Institution studying AI policy, said the E.U. AI Act has the world’s attention. But he warned that no single law will solve the problems presented by AI.

“This is going to be decades of adaptation,” he said.

Correction

A previous version of this article incorrectly reported that the bill would bar AI companies from publishing summaries of copyrighted data. In fact, the bill would require companies to publish summaries of such data. This version has been corrected.


