
AI rockets ahead in vacuum of U.S. regulation


Illustration of a scale balancing binary code. Credit: Lazaro Gamio/Axios

The overnight success of ChatGPT is kicking off a tech-industry race to bake AI into everyday products and decision-making with little oversight from government.

Why it matters: ChatGPT’s uncanny ability to spit out stories, articles and recipes is fueling both awareness of and concern about AI, yet there’s almost no effective U.S. regulation of the technology in place, raising fears it could promote bias, misinformation, fraud and hate.

What they’re saying: “We can harness and regulate AI to create a more utopian society or risk having an unchecked, unregulated AI push us toward a more dystopian future,” Rep. Ted Lieu (D-Calif.), who recommends creating a government agency to oversee AI, wrote in a New York Times op-ed last week.

U.S. lawmakers have been talking about AI’s promise and perils for many years. But as with previous waves of tech innovation, products’ speed-to-market has far outstripped the government’s readiness to regulate.

  • For every leader like Lieu who’s pushing for fast, strong AI rules, there’s another warning that premature regulation could stifle progress and hamper American efforts to compete with China and other rivals.

State of play: “It’s a patchwork system [of AI regulation] in the United States,” with some laws around transparency and preventing discrimination from AI on the state level but only early moves at the federal level, Jessica Newman, who leads the AI Security Initiative at the UC Berkeley Center for Long-Term Cybersecurity, told Axios.

  • “I still think there’s a long way to go and I would love to see federal AI regulation that is more comprehensive,” Newman said.

What’s happening: In Congress, lawmakers have proposed regulations on the use of facial recognition and other applications of AI.

  • The White House has an AI research office and has released a Blueprint for an AI Bill of Rights.
  • The Federal Trade Commission, Equal Employment Opportunity Commission and other federal agencies have begun to float new rules on the use of AI.

Driving the news: This week, the National Institute of Standards and Technology (NIST), part of the Department of Commerce, put out its long-awaited AI Risk Management Framework, meant to give companies guidance on designing, deploying and using AI systems.

  • The framework should “accelerate AI innovation and growth while advancing — rather than restricting or damaging — civil rights, civil liberties and equity for all,” Deputy Commerce Secretary Don Graves said this week.

Yes, but: The framework is voluntary, and companies face no consequences for straying from it.

  • The hope is that eventual rules around AI will take lessons from the framework, Chandler Morse, vice president of corporate affairs at Workday, an enterprise cloud tech company, told Axios.
  • “It’s going to have a major impact in future conversations around AI governance and the regulatory landscape,” he said. “ChatGPT has sort of elevated the conversation… there’s a recognition of look, we’ve got to get as much paint on the canvas as we can.”

The other side: “The government should not be in the business of making very fine-grained laws or regulation because stuff just moves very, very rapidly,” Sridhar Ramaswamy, founder of ad-free search engine Neeva, which has its own generative text program for search results, told Axios.

  • “I’m hard pressed to say that regulation is going to be helpful when it comes to AI in the near term… This is not to say that existing laws should not be applied to people using these models in unfair ways,” he said.

Meanwhile: Across the Atlantic, European Union regulators approved the Artificial Intelligence Act last December, with the European Parliament set to vote on it this spring and adoption expected by the end of 2023. The sweeping regulation will apply to companies outside the EU as well, with fines for noncompliance of up to €30 million.

  • “The difference between Europe and the U.S. is that when Europe decides to regulate something, they can actually get it done,” Morse said.
  • The U.S. and the European Union on Friday signed an agreement to collaborate on “responsible advancements” in AI.
  • China passed rules targeting algorithmic recommendations last March.

What to watch: It’s possible the rapid-fire adoption of ChatGPT will push regulators to move more quickly on AI rules, but such efforts face political hurdles and practical obstacles, since there are so many different uses of AI.

  • The Federal Trade Commission is in the process of creating new rules around commercial surveillance and data security that will govern any company that develops or deploys AI systems. Individual states such as Massachusetts are also mulling legislation, per CBS News.
  • With split party control of Congress, few expect a bipartisan breakthrough on a new federal AI law.


