
U.S. efforts to regulate A.I. gather steam


Hello, and welcome to Eye on A.I. Last week saw further evidence that the effort to regulate A.I. in the U.S., which has lagged behind Europe and other places, including China, is now gathering momentum.

Senate Majority Leader Chuck Schumer (D-N.Y.), in an address in Washington, unveiled what he called his “SAFE Innovation Framework for A.I. Policy.” SAFE is an acronym that stands for security, accountability, protecting our foundations (the F in SAFE), and explainability.

Security, Schumer said, meant security from the threat that rogue actors or hostile nations would use A.I. for “extortionist financial gain or political upheaval.” He also said it meant security for the American workforce, which might see significant job losses, particularly among the already hard-hit middle class, from the widespread deployment of generative A.I.

Accountability, he said, meant preventing the companies creating and using A.I. from doing so in ways that unfairly exploit individuals and consumers and erode creators’ intellectual property rights. He said that A.I. must be developed in ways that reinforce, rather than undermine, American values—that is the foundations bit. And he said that enabling users to understand why an A.I. system arrived at any particular output was essential. Schumer rightly called explainability “perhaps the greatest challenge we face,” since computer scientists haven’t come up with good ways of unpacking which factors weigh most heavily in the decision-making of the deep learning algorithms that underpin most of the current enthusiasm for A.I.
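To make the explainability challenge concrete: most of the techniques researchers do have, such as gradient-based attribution, offer only rough clues about which inputs mattered to a model’s output. Below is a minimal, purely illustrative sketch of one such technique in PyTorch; the toy model and input values are assumptions for demonstration, not anything drawn from Schumer’s framework or from a real deployed system.

```python
# A minimal sketch of gradient-based input attribution ("saliency"),
# one common explainability technique. The tiny model and the input
# values are purely illustrative assumptions.
import torch
import torch.nn as nn

# Toy classifier standing in for a deep learning model.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

# A single hypothetical input with four features.
x = torch.tensor([[0.5, -1.2, 3.0, 0.1]], requires_grad=True)

# Forward pass, then take the gradient of the winning class's score
# with respect to the input features.
scores = model(x)
top_class = scores.argmax(dim=1).item()
scores[0, top_class].backward()

# The gradient magnitude for each feature is a rough proxy for how
# much that feature influenced the prediction.
saliency = x.grad.abs().squeeze()
for i, s in enumerate(saliency.tolist()):
    print(f"feature {i}: approximate influence {s:.3f}")
```

Even this kind of attribution only hints at what a model is doing, which is part of why some experts see regulatory demands for full explainability as such a high bar.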

Finally, Schumer announced his intention to hold a series of “A.I. Insight Forums” on Capitol Hill this fall that will differ in format from traditional Congressional hearings, with the idea being to hear from experts on how best to tackle a range of issues around A.I. policy.

That all sounds good, but some, such as Ben Winters, senior counsel at the Electronic Privacy Information Center, questioned whether Schumer’s call for a new round of Insight Forums risked duplicating or negating the nascent efforts Congress and other parts of the U.S. government have already made over the past year on A.I. policy. Winters told the New York Times that Schumer’s approach was “frustrating and disappointing” and expressed concerns that “other stronger, more protective A.I. laws may get sidelined or delayed as the process plays out.”

Others questioned whether Schumer’s insistence that “innovation must be our North Star” could be used to justify a lighter-touch regulatory regime than many critics of Big Tech think is warranted. Meanwhile, some A.I. experts pointed out that “explainability” may be too high a bar, technically, for many A.I. systems that are nonetheless useful, and that the term itself is highly subjective, as different levels of interpretability might be called for depending on whether the information is being provided to a consumer, an A.I. developer, a company deploying A.I., or a regulator.

Jim Steyer, the CEO of Common Sense Media, a nonprofit that advocates for media and technology literacy and accountability for companies building tech, told me he was skeptical the federal government would act in a timely manner. “They can’t even pass privacy legislation, so what makes you think they are going to ever pass any kind of meaningful A.I. legislation?” he said.

Steyer was among a group of technology experts and civil society groups concerned about A.I. that met with President Joe Biden in San Francisco last week. It was the first time the President himself had held a lengthy meeting with A.I. experts. He had previously “dropped by” a meeting Vice President Kamala Harris held with executives from top A.I. companies at the White House in May.

Steyer characterized Biden as “really well-prepped” and “really engaged” and said that the President asked “really thoughtful questions.” “He was really focused on the implications for democracy,” Steyer said.


(The Federal Election Commission—an independent regulatory agency—just missed a chance to take stronger action against deepfake campaign ads, with its six-member commission split down the middle on a proposal to designate A.I.-made advertising “fraudulent misrepresentation of campaign authority.”)

There was a general consensus among the participants that “this is massive and has to be regulated now,” Steyer said. A.I., Steyer believes, will potentially have an even greater impact than social media has had over the past two decades, so the government has to be careful not to repeat the mistakes it made with social media. That means ensuring that arguments about stifling innovation don’t outweigh sensible regulation to protect democratic values and processes, privacy rights, and people’s mental health. He said he tried to impress upon the President that the government needed to move quickly on A.I., which he said Biden seemed to understand.

Common Sense is planning some big initiatives to help educators and families better understand the strengths and risks of A.I. technologies such as chatbots, a move that may ultimately help pressure companies building generative A.I. software to ensure it is safer, Steyer told me. But he’s keeping the details under wraps pending a big announcement in the coming days.

There is also likely to be a push, which Common Sense will help champion, to introduce A.I. regulation at the state level in California. This mirrors steps Common Sense and other civil society groups took in 2018 to push for passage of a data privacy law in California, hoping that would become, absent any federal action, the de facto “law of the land,” Steyer says. The logic is that tech companies don’t want to lose access to a market as large as California and so will modify their products to meet the state law, enabling people in other states to benefit from the increased protection too. This is similar to the leverage the European Union has applied on issues such as data privacy, often setting the standard for the rest of the world when it comes to regulating Big Tech.

What’s clear is that with a new push on A.I. legislation in California, multiple efforts on Capitol Hill and within the executive branch, final negotiations within the EU over its A.I. Act, and a summit in London on international governance of A.I., it promises to be a decisive autumn for A.I. regulation.

Stay tuned to Eye on A.I. as we continue to monitor this and other A.I. developments. With that, here’s the rest of this week’s A.I. news.

Jeremy Kahn
@jeremyakahn
jeremy.kahn@fortune.com

A.I. IN THE NEWS

German newspaper Bild plans to replace a range of editorial roles with A.I. The tabloid will cut about 200 jobs, including editors, photo editors, and print production staff, as part of a €100 million cost-cutting program, with A.I. tools taking over some of those tasks, the Guardian reported. Mathias Döpfner, CEO of Bild’s parent company Axel Springer, said in February that the media giant was making a major bet on A.I., which he said could “make independent journalism better than it ever was—or replace it.”

Databricks buys A.I. startup MosaicML for $1.3 billion. Databricks, a data storage and management company based in San Francisco, announced it is acquiring MosaicML, a generative A.I. startup also based in the city, in a deal worth approximately $1.3 billion, the Wall Street Journal reported. MosaicML specializes in helping companies build language models that can perform many of the same tasks as the ultra-large language models from companies such as OpenAI and Google, but with considerably fewer parameters and far less training data, making them easier and cheaper to train and run and allowing a customer to train a model on its own data. The acquisition, expected to close by July 31, will allow MosaicML to continue as a standalone service within Databricks, the Journal wrote. It is believed to be the most valuable acquisition to date in the generative A.I. boom.


Generative A.I. phenom and open-source champion Stability AI loses key staff. Stability AI, the London-based startup best known for its popular open-source text-to-image model Stable Diffusion, has lost two of its top executives: head of research David Ha and chief operating officer Ren Ito, Bloomberg reported. Stability CEO and founder Emad Mostaque said Ito was “let go” as part of a broader management shake-up, while the company said Ha, who had joined Stability from the advanced A.I. research lab Google Brain, was “taking a break” from the company “for personal reasons.” Mostaque said the company was continuing to hire and now employed more than 185 people. He has also disputed recent allegations in a Forbes story that he exaggerated Stability’s achievements and partnerships, and that the company, valued at more than $1 billion in its last funding round in October, is struggling to raise another major venture capital round and has sometimes failed to pay employees on time.

Data-labeling contractors who worked on Google’s chatbot Bard claim they were fired for raising concerns about harsh working conditions. Contractors who work on Google’s A.I. chatbot, Bard, claim they are under unrelenting pressure to provide feedback on the chatbot’s responses quickly, at the expense of label quality, according to a report in tech publication The Register. Hired through data services provider Appen, the workers are responsible for verifying the accuracy of the bot’s responses and providing feedback for improvements. However, according to contractor Ed Stackhouse, they are often given insufficient time to fact-check Bard’s outputs. The contractors have also voiced concerns that their working conditions could lead to Bard disseminating potentially harmful inaccuracies, particularly in sensitive areas such as health care and politics. A group of six workers, including Stackhouse, say they were fired for speaking out and have filed an unfair labor practice complaint with the National Labor Relations Board.

Photo and art contributors complain about the process behind Adobe’s Firefly generative A.I. Adobe’s text-to-image A.I. model Firefly has proved popular in part because it was trained on images users had uploaded to Adobe Stock, which Adobe has said removes any issues concerning copyright. The company says its licensing terms give it the right to use Stock images for A.I. training, but that hasn’t stopped Stock contributors from complaining, according to a story in VentureBeat. The contributors argue that Adobe didn’t provide adequate notice or seek their consent to use their images to train Firefly, nor has the company explained exactly how they will be compensated for the use of their images to power Firefly. Some have also claimed that Firefly has undercut the earnings potential of their work, with the influx of Firefly-generated images into Adobe Stock cannibalizing the platform. Legal experts told the publication, however, that the Stock contributors may not have a viable legal case, as Adobe’s terms of service provide broad usage rights and the A.I.-generated images are not technically derivatives of the contributors’ works.

Everyone wants an open-source language model—to create a sexbot. That’s what a Washington Post story concludes. The piece looks at some of the most popular uses of Meta’s open-source large language model LLaMA, whose model weights leaked onto the web in March. Since then, developers have used the model to build all kinds of chatbots, including ones tailored for sexual banter. The rise of LLaMA-powered sexbots underscores both the popularity of open-source A.I. and some of the dangers the phenomenon poses. Companies that release open-source models have far fewer ways to police how their A.I. gets used than companies that only allow access through a proprietary API.
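To illustrate why API-only access gives companies more leverage: a hosted model can sit behind a policy check that every request must pass, whereas locally run open-source weights have no such chokepoint. The sketch below is a hypothetical, deliberately simplified gatekeeper; the function names and keyword-based check are illustrative assumptions, not any provider’s actual moderation system (real services rely on trained classifiers and far more nuanced policies).

```python
# Hypothetical sketch of API-side usage policing. Everything here is
# illustrative; real providers use trained moderation models, not
# keyword lists.
BLOCKED_TOPICS = {"explicit sexual content", "extremist propaganda"}

def violates_policy(prompt: str) -> bool:
    # Crude stand-in for a moderation classifier.
    return any(topic.split()[0] in prompt.lower() for topic in BLOCKED_TOPICS)

def run_model(prompt: str) -> str:
    # Placeholder for the hosted model call.
    return f"[model output for: {prompt!r}]"

def serve_completion(prompt: str) -> str:
    # Every API request passes through the policy check first, a chokepoint
    # that simply doesn't exist for leaked or openly distributed weights.
    if violates_policy(prompt):
        return "Request refused: violates usage policy."
    return run_model(prompt)

if __name__ == "__main__":
    print(serve_completion("Write a haiku about spring."))
    print(serve_completion("Write explicit roleplay dialogue."))
```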


EYE ON A.I. RESEARCH

None of today’s foundation models comply with the draft EU A.I. Act. That was the conclusion of researchers at Stanford University who looked at whether leading large language models would comply with the new EU rules if they were enacted as currently drafted. Bloom, the open-source LLM created by Hugging Face, came the closest, falling short only on its inability to track and disclose exactly where in the EU the model is being used. (The inability to monitor and police downstream uses is a major weakness of open-source software governance in general.) Eight of the other nine popular models Stanford assessed fell short on transparency around the use of copyrighted data, and six of the 10 scored less than 50% across all the metrics. The Financial Times predicted “a looming clash between companies spending billions of dollars developing sophisticated AI models, often with the support of politicians who view the technology as central to national security, and global regulators intent on curbing its risks.”

FORTUNE ON A.I.

Sam Altman says he doesn’t need equity in $27 billion OpenAI because he already has ‘enough money’ and is motivated by other ‘selfish’ benefits—by Kylie Robison

Amazon isn’t launching a Google-style ‘Code Red’ to catch up with Big Tech rivals on A.I.: ‘We’re 3 steps into a 10K race’—by Eleanor Pringle

Marc Andreessen says we’re in a ‘freeze-frame moment’ with A.I.—and has advice for young people—by Steve Mollman

From free Stanford courses at Northrop Grumman to revamped internships at Salesforce, here’s how 9 companies are competing on A.I. talent—by Paolo Confino

BRAINFOOD

The old web is dying. A.I. is about to deliver the coup de grâce. In its place, a new web is being born. That’s the argument of a thoughtful and provocative essay by James Vincent in The Verge. I highly recommend checking it out. Vincent notes several distinct but related trends across the web. What they all have in common is that they are damaging or devaluing the quality of information available on the internet, mostly by supplanting human expertise and human-created, or at least human-curated, content with massive amounts of auto-generated content. Generative A.I. supercharges this trend. But as Vincent notes, A.I.-generated content is derivative, and often inaccurate or misleading in subtle and insidious ways. And that trend will only get worse as this convincing but inaccurate information is fed into future A.I. systems that will churn out yet more derivative and inaccurate content. Quality sites could protect themselves by walling themselves off from Google and the other search engines that will increasingly be powered by generative A.I., but only at the expense of their current business models and of the very openness on which the web was predicated. Vincent rightly notes: “Essentially, this is a battle over information—over who makes it, how you access it, and who gets paid.”


