
AI and You: Big Tech Says AI Regulation Needed, Microsoft Takes … – CNET


In a move that should surprise no one, tech leaders who gathered at closed-door meetings in Washington, DC, this week to discuss AI regulation with legislators and industry groups agreed on the need for laws governing generative AI technology. But they couldn’t agree on how to approach those regulations. 

“The Democratic senator Chuck Schumer, who called the meeting ‘historic,’ said that attendees loosely endorsed the idea of regulations but that there was little consensus on what such rules would look like,” The Guardian reported. “Schumer said he asked everyone in the room — including more than 60 senators, almost two dozen tech executives, advocates and skeptics — whether government should have a role in the oversight of artificial intelligence, and that ‘every single person raised their hands, even though they had diverse views.'”

I guess “diverse views” is a new way of saying “the devil is in the details.”

Tech CEOs and leaders in attendance at what Schumer called the AI Insight Forum included OpenAI’s Sam Altman, Google’s Sundar Pichai, Meta’s Mark Zuckerberg, Microsoft co-founder Bill Gates and X/Twitter owner Elon Musk. Others in the room included Motion Picture Association CEO Charles Rivkin; former Google chief Eric Schmidt; Center for Humane Technology co-founder Tristan Harris; Deborah Raji, a researcher at the University of California, Berkeley; AFL-CIO President Elizabeth Shuler; Randi Weingarten, president of the American Federation of Teachers; Janet Murguía, president of Latino civil rights and advocacy group UnidosUS; and Maya Wiley, president and CEO of the Leadership Conference on Civil and Human Rights, the Guardian said.

“Regulate AI risk, not AI algorithms,” IBM CEO Arvind Krishna said in a statement. “Not all uses of AI carry the same level of risk. We should regulate end uses — when, where, and how AI products are used. This helps promote both innovation and accountability.”

In addition to discussing how the 2024 US elections could be protected against AI-fueled misinformation, the group talked with 60 senators from both parties about whether there should be an independent AI agency and about “how companies could be more transparent and how the US can stay ahead of China and other countries,” the Guardian reported. 

The AFL-CIO also raised the issue of workers’ rights, given the widespread impact AI is expected to have on jobs of all kinds. AFL-CIO chief Shuler, in a statement following the gathering, said workers are needed to help “harness artificial intelligence to create higher wages, good union jobs, and a better future for this country. … The interests of working people must be Congress’ North Star. Workers are not the victims of technological change — we’re the solution.”

Meanwhile, others called out the meeting over who wasn’t there and noted that the opinions of tech leaders who stand to benefit from genAI technology should be weighed against other views.

“Half of the people in the room represent industries that will profit off lax AI regulations,” Caitlin Seeley George, a campaigns and managing director at digital rights group Fight for the Future, told The Guardian. “Tech companies have been running the AI game long enough and we know where that takes us.”


Meanwhile, the White House also said this week that a total of 15 notable tech companies have now signed on to a voluntary pledge to ensure AI systems are safe and are transparent about how they work. On top of the seven companies that initially signed on in July — OpenAI, Microsoft, Meta, Google, Amazon, Anthropic and Inflection AI — the Biden administration said an additional eight companies opted in. They are Adobe, Salesforce, IBM, Nvidia, Palantir, Stability AI, Cohere and Scale AI.

“The President has been clear: harness the benefits of AI, manage the risks, and move fast — very fast,” Jeff Zients, the White House chief of staff, said in a statement, according to The Washington Post. “And we are doing just that by partnering with the private sector and pulling every lever we have to get this done.”

But the pledge remains voluntary, and as Axios reported in July, it doesn’t “go nearly as far as provisions in a bevy of draft regulatory bills submitted by members of Congress in recent weeks — and could be used as a rationale to slow-walk harder-edged legislation.”

Here are the other doings in AI worth your attention.

Google launches Digital Futures Project to study AI 

Google this week announced the Digital Futures Project, “an initiative that aims to bring together a range of voices to promote efforts to understand and address the opportunities and challenges of artificial intelligence (AI). Through this project, we’ll support researchers, organize convenings and foster debate on public policy solutions to encourage the responsible development of AI.”

The company also said it would give $20 million in grants to “leading think tanks and academic institutions around the world to facilitate dialogue and inquiry into this important technology.” (That sounds like a big number until you remember that Alphabet/Google reported $18.4 billion in profit in the second quarter of 2023 alone.) 
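For scale, those $20 million in grants work out to roughly 0.1% of that single quarter’s profit.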

Google says the first group of grants went to the Aspen Institute, Brookings Institution, Carnegie Endowment for International Peace, the Center for a New American Security, the Center for Strategic and International Studies, the Institute for Security and Technology, the Leadership Conference Education Fund, MIT Work of the Future, the R Street Institute and SeedAI.

The grants aside, getting AI right is a really, really big deal at Google, which is now battling for AI market dominance against OpenAI’s ChatGPT and Microsoft’s ChatGPT-powered Bing. Alphabet CEO Sundar Pichai told his 180,000 employees in a Sept. 5 letter celebrating the 25th anniversary of Google that “AI will be the biggest technological shift we see in our lifetimes. It’s bigger than the shift from desktop computing to mobile, and it may be bigger than the internet itself. It’s a fundamental rewiring of technology and an incredible accelerant of human ingenuity.”  

When asked by Wired if he had been too cautious with Google’s AI investments and should have released Google Bard before OpenAI released ChatGPT in November 2022, Pichai essentially said he’s playing the long game. “The fact is, we could do more after people had seen how it works. It really won’t matter in the next five to 10 years.”


Adobe adds AI to its creative toolset, including Photoshop

Firefly, Adobe’s family of generative AI tools, is out of beta testing. That means “creative types now have the green light to use it to create imagery in Photoshop, to try out wacky text effects on the Firefly website, to recolor images in Illustrator and to spruce up posters and videos made with Adobe Express,” reports CNET’s Stephen Shankland.

Adobe will include credits to use Firefly in varying amounts depending on which Creative Cloud subscription plan you’re paying for. Shankland reported that if you have the full Creative Cloud subscription, which gets you access to all Adobe’s software for $55 per month, you can produce up to 1,000 AI creations a month. If you have a single-app subscription, to use Photoshop or Premiere Pro at $21 per month, it’s 500 AI creations a month. Subscriptions to Adobe Express, an all-purpose mobile app costing $10 per month, come with 250 uses of Firefly.

But take note: Adobe will raise its subscription prices about 9% to 10% in November, citing the addition of Firefly and other AI features, along with new tools and apps. So yes, all that AI fun comes at a price.
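If the roughly 10% bump applies to the $55-per-month full Creative Cloud plan, that works out to about $5 more per month, or around $60.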

Microsoft offers to help AI developers with copyright protection

Copyright and intellectual property concerns come up often when talking about AI, since the law is still evolving around who owns AI-generated output and whether the models behind AI chatbots were trained on copyrighted content scraped from the internet without owners’ permission.

That’s led Microsoft to say that developers who pay to use its commercial AI “Copilot” services to build AI products will be offered protection against lawsuits, with the company defending them in court and paying settlements. Microsoft said it’s offering the protection because the company, and not its customers, should figure out the right way to address the concerns of copyright and IP owners as the world of AI evolves. Microsoft also said it’s “incorporated filters and other technologies that are designed to reduce the likelihood that Copilots return infringing content.”

“As customers ask whether they can use Microsoft’s Copilot services and the output they generate without worrying about copyright claims, we are providing a straightforward answer: yes, you can, and if you are challenged on copyright grounds, we will assume responsibility for the potential legal risks involved,” the company wrote in a blog post.

“This new commitment extends our existing intellectual property indemnity support to commercial Copilot services and builds on our previous AI Customer Commitments,” the post says. “Specifically, if a third party sues a commercial customer for copyright infringement for using Microsoft’s Copilots or the output they generate, we will defend the customer and pay the amount of any adverse judgments or settlements that result from the lawsuit, as long as the customer used the guardrails and content filters we have built into our products.”

Students log in to ChatGPT, find a friend on Character.ai

After a huge initial spike when OpenAI released ChatGPT last November, traffic to the chatbot dipped over the past few months as rival AI chatbots including Google Bard and Microsoft Bing came on the scene. But now that summer vacation is over, students seem to be driving an uptick in traffic for ChatGPT, according to estimates released by Similarweb, a digital data and analytics company.


“ChatGPT continues to rank among the largest websites in the world, drawing 1.4 billion worldwide visits in August compared with 1.2 billion for Microsoft’s Bing search engine, for example. From zero prior to its launch in late November, chat.openai.com reached 266 million visitors in December, grew another 131% the following month, and peaked at 1.8 billion visits in May. Similarweb ranks openai.com #28 in the world, mostly on the strength of ChatGPT.”
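(That 131% jump from December’s 266 million works out to roughly 614 million visits in January.)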

But one of the AI sites to gain even more visitors is ChatGPT rival Character.ai, which invites users to personalize their chatbots as famous personalities or fictional characters and have them respond in that voice. Basically, you can have a conversation with a chatbot masquerading as a famous person like Cristiano Ronaldo, Taylor Swift or Lady Gaga, a historical figure like Albert Einstein or Abraham Lincoln, or a fictional character like Super Mario or Tony Soprano.

“Connecting with the youth market is a reliable way of finding a big audience, and by that measure, ChatGPT competitor Character AI has an edge,” Similarweb said. “The character.ai website draws close to 60% of its audience from the 18-24-year-old age bracket, a number that held up well over the summer. Character.AI has also turned website users into users of its mobile app to a greater extent than ChatGPT, which is also now available as an app.”

The reason “may be simply because Character AI is a playful companion, not just a homework helper,” the research firm said. 

AI term of the week: AI safety 

With all the discussion around regulating AI, and how the technology should be “safe,” I thought it worthwhile to share a couple of examples of how AI safety is being characterized.

The first is a straightforward explanation from CNBC’s AI Glossary: How to Talk About AI Like an Insider:

“AI safety: Describes the longer-term fear that AI will progress so suddenly that a super-intelligent AI might harm or even eliminate humanity.”

The second comes from a White House white paper called “Ensuring Safe, Secure and Trustworthy AI,” which outlines the voluntary commitments those 15 tech companies signed to help ensure their systems won’t harm people.

“Safety: Companies have a duty to make sure their products are safe before introducing them to the public. That means testing the safety and capabilities of their AI systems, subjecting them to external testing, assessing their potential biological, cybersecurity, and societal risks, and making the results of those assessments public.”

Editors’ note: CNET is using an AI engine to help create some stories. For more, see this post.




