
AI and You: White House Sets AI Guardrails, Election Misinformation … – CNET


The very long read we were expecting from the White House on setting guardrails around AI was released this past week as a 111-page Executive Order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” President Joe Biden and his administration say the goal is to establish a framework that sets “new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world.”

Here’s the fact sheet summarizing the Executive Order’s main points, if you’re not up to scanning the entire EO. Here are five of the top takeaways:

Testing safety and security before AI tools are released: There’s much debate about whether OpenAI should have done a little more prep work before releasing its groundbreaking and potentially paradigm-shifting ChatGPT to the world a year ago, given the opportunities and risks posed by the generative AI chatbot. So now AI developers will be required to “share their safety test results” and other critical information with the US government.

“Companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model and must share the results of all red-team safety tests.” Red-team testing refers to having a dedicated group specifically target the AI system, trying to find security vulnerabilities.

Expanding on the testing requirement, the National Institute of Standards and Technology is tasked with creating “rigorous standards for extensive red-team testing to ensure safety before public release.” NIST will also help design tools and tests to ensure AI systems are safe, secure and trustworthy.

Protecting against potentially harmful AI-engineered biological materials: Agencies that fund “life-science projects” will be required to establish standards to prevent bad actors from using AI to engineer dangerous biological materials.

Transparency: To protect Americans from AI-enabled fraud and deception, the Department of Commerce is being tasked with developing guidance for standards and best practices for “detecting AI-generated content and authenticating official content.” That essentially means labeling AI-generated content with watermarks and disclosures. “Trust matters,” Biden said at a press event about the EO. “Everyone has a right to know when audio they’re hearing or video they’re watching is generated or altered by an AI.”
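
The details of those watermarking and authentication standards are still to come, but one simple way to think about “authenticating official content” is attaching a cryptographic signature to a piece of media so anyone can check it hasn’t been altered. The sketch below is purely illustrative and assumes a hypothetical publisher key; it is not the scheme Commerce will recommend.

```python
# Illustrative only: check that a media file still matches the signature its
# publisher attached at release. The key and content here are hypothetical;
# this is not the Commerce Department's forthcoming standard.
import hashlib
import hmac

SIGNING_KEY = b"hypothetical-publisher-key"

def sign(content: bytes) -> str:
    """Return an HMAC-SHA256 signature for the content."""
    return hmac.new(SIGNING_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, signature: str) -> bool:
    """Check that the content still matches its published signature."""
    return hmac.compare_digest(sign(content), signature)

clip = b"official campaign video bytes"
tag = sign(clip)
print(verify(clip, tag))               # True: matches the published label
print(verify(clip + b" edited", tag))  # False: altered after signing
```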

Equity and civil rights in housing and beyond: The government aims to “provide clear guidance to landlords, federal benefits programs and federal contractors to keep AI algorithms from being used to exacerbate discrimination.”

Jobs and labor standards: The US says it will develop “principles and best practices to mitigate the harms and maximize the benefits of AI for workers by addressing job displacement; labor standards; workplace equity, health and safety; and data collection…These principles and best practices will benefit workers by providing guidance to prevent employers from undercompensating workers, evaluating job applications unfairly or impinging on workers’ ability to organize.”

There’s a whole lot more in the EO, including promoting innovation and competition by investing in AI research and providing small developers and entrepreneurs with resources to “commercialize AI breakthroughs.”  

Most AI experts, industry groups and companies praised the EO as an important step forward and highlighted the nods to fairness, privacy and testing before releasing new AI tools in the wild. (For tech wonks, Axios called out that the “testing rules will apply to AI models whose training used ‘a quantity of computing power greater than 10 to the power of 26 integer or floating-point operations.’ Experts say that will exclude nearly all AI services that are currently available.”) 
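
For a sense of scale, a common rule of thumb estimates training compute as roughly 6 x parameters x training tokens. The sketch below applies that approximation to illustrative model sizes (assumptions, not figures from the EO or Axios) to show why most of today’s models land well under the 10^26 threshold.

```python
# Back-of-envelope check against the EO's 1e26-operation reporting threshold,
# using the common approximation: training compute ~= 6 * parameters * tokens.
# The model sizes below are illustrative assumptions, not official figures.
EO_THRESHOLD_FLOPS = 1e26

def training_flops(parameters: float, tokens: float) -> float:
    """Approximate total training compute in floating-point operations."""
    return 6 * parameters * tokens

examples = {
    "GPT-3-class run (175B params, 300B tokens)": training_flops(175e9, 300e9),
    "Hypothetical frontier run (2T params, 20T tokens)": training_flops(2e12, 20e12),
}

for name, flops in examples.items():
    side = "over" if flops > EO_THRESHOLD_FLOPS else "under"
    print(f"{name}: ~{flops:.1e} FLOPs ({side} the threshold)")
```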

But industry watchers also noted that the order doesn’t go far enough. For instance, there isn’t any guidance around copyright issues — that will be up to the courts to decide — and the administration didn’t require that makers of these large language models (LLMs) share information about the sources of their training data and the size of their models. 

This wasn’t the only notable step by a government to put a check on AI. In London, the UK hosted the AI Safety Summit, which included representatives from 28 governments, including the US, China and the European Union. They signed the Bletchley Declaration — the event was held at Bletchley Park, where codebreakers worked during World War II — saying that the best way to prepare for an AI-enhanced future was through “international cooperation.”


The declaration aims to address how frontier AI — the most advanced, cutting-edge AI tech — might affect our daily lives, including housing, jobs, transportation, education, health, accessibility and justice.

“Artificial Intelligence presents enormous global opportunities: it has the potential to transform and enhance human wellbeing, peace and prosperity,” the declaration states. “To realise this, we affirm that, for the good of all, AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible.”

Biden and UK Prime Minister Rishi Sunak each highlighted the importance of these first steps toward getting a handle on AI. “One thing is clear: To realize the promise of AI and avoid the risks, we need to govern this technology,” Biden said. “There’s no other way around it.”

Sunak called the Bletchley Declaration “a landmark achievement that sees the world’s greatest AI powers agree on the urgency behind understanding the risks of AI — helping ensure the long-term future of our children and grandchildren.”

Here are the other doings in AI worth your attention. 

Politicians shouldn’t use AI to fuel election misinformation 

As governments just start the work of managing the risks and opportunities of AI, most US adults believe that AI tools will “amplify misinformation in next year’s presidential election at a scale never seen before,” according to a poll conducted by the Associated Press-NORC Center for Public Affairs Research and the University of Chicago Harris School of Public Policy.

“The poll found that nearly 6 in 10 adults (58%) think AI tools — which can micro-target political audiences, mass produce persuasive messages, and generate realistic fake images and videos in seconds — will increase the spread of false and misleading information during next year’s elections.”

That concern is shared by many Americans who haven’t even used AI tools. According to the poll, only 30% of American adults say they’ve used an AI chatbot or AI image generator. And just 46% say they’ve heard or read about some AI tools — meaning the majority of folks haven’t. 

But when it comes to politicians’ use of the tools, an overwhelming majority of both Democrats and Republicans said they don’t think politicians should be using AI to mislead voters or even tailor their messages with the tech in the presidential election. 

“When asked whether it would be a good or bad thing for 2024 presidential candidates to use AI in certain ways, clear majorities said it would be bad for them to create false or misleading media for political ads (83%), to edit or touch-up photos or videos for political ads (66%), to tailor political ads to individual voters (62%) and to answer voters’ questions via chatbot (56%).”

The AP noted that bipartisan pessimism about politicians and their willingness to fuel misinformation using AI tech comes after the Republican National Committee used AI to create an attack ad against Biden, while Florida Gov. Ron DeSantis’ campaign used the tech to mislead voters about former President Donald Trump.

Biden, in his announcement about the Executive Order, specifically called out the problems with deepfakes, joking about seeing a deepfake of himself, the kind that can be created using just a three-second recording of your voice.

“I watched one of me. I said, ‘When the hell did I say that?’ But all kidding aside, a three-second recording of your voice to generate an impersonation good enough to fool your family — or you. I swear to God. Take a look at it. It’s mind blowing. And they can use it to scam loved ones into sending money because they think you are in trouble. That’s wrong.”

UN may take AI into virtual conflict zones to help solve problems

Ahead of the devastating conflict in Israel and Gaza, the UN hired an AI company in August “to develop a first-of-its-kind AI model that they hope will help analyze solutions to the Israel-Palestinian conflict,” Wired reported.

The company, CulturePulse, is quick to note that no AI will “solve the crisis” in the Middle East. But one of the company’s co-founders, F. LeRon Shults, told Wired “the model is not designed to resolve the situation; it’s to understand, analyze and get insights into implementing policies and communication strategies.”

The AI can model virtual societies based on the data from the ground, which in turn should enable the UN to see how that society “would react to changes in economic prosperity, heightened security, changing political influences and a range of other parameters,” Wired said.


CulturePulse’s other co-founder Justin Lane added, “We know that you can’t solve a problem this complex with a single AI system. That’s not ever going to be feasible in my opinion. What is feasible is using an intelligent AI system — using a digital twin of a conflict — to explore the potential solutions that are there.” 
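
CulturePulse hasn’t published its model, but a “digital twin of a conflict” is broadly in the family of agent-based simulation: model a population of interacting agents, tweak a policy parameter, and compare outcomes. Here’s a minimal, purely illustrative sketch of that idea; the tension values and policy effects are invented, and none of this reflects CulturePulse’s actual system.

```python
# Toy agent-based sketch: simulate how average "tension" in a population of
# agents evolves under two hypothetical policy scenarios. Entirely illustrative.
import random
from statistics import mean

random.seed(42)

def simulate(policy_effect: float, n_agents: int = 500, steps: int = 50) -> float:
    """Return mean tension after `steps` rounds; policy_effect nudges tension down."""
    tension = [random.uniform(0.4, 0.9) for _ in range(n_agents)]
    for _ in range(steps):
        for i in range(n_agents):
            neighbor = random.randrange(n_agents)
            # Social influence: drift toward a random neighbor's tension level,
            # minus whatever calming effect the policy scenario applies.
            tension[i] += 0.1 * (tension[neighbor] - tension[i]) - policy_effect
            tension[i] = min(max(tension[i], 0.0), 1.0)
    return mean(tension)

print("baseline:            ", round(simulate(policy_effect=0.000), 3))
print("policy intervention: ", round(simulate(policy_effect=0.002), 3))
```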

After a traffic dip, ChatGPT wins back users’ attention

After seeing US traffic to its chatbot wane in the summer, OpenAI’s ChatGPT regained the attention of users in September and October, most likely boosted by students returning to school and turning to AI for help on homework, research firm Similarweb said this week.

“ChatGPT’s traffic hit a lull over the summer, dipping significantly from its Spring 2023 highs, but has recovered significantly in recent weeks. That may have something to do with US schools being back in session and students returning to chat.openai.com as a source of homework help. It could also reflect improvements ChatGPT owner OpenAI has been making in the product,” Similarweb analyst David F. Carr said. 

“ChatGPT remains far and away the most popular pure-play AI Chat product, attracting more global traffic than bing.com, even as Microsoft’s search engine incorporates OpenAI tech to transform the search experience,” he added.

Looking at the numbers, ChatGPT peaked at 1.8 billion worldwide visits in May before dropping to 1.4 billion in August, the research firm says. In September, that rebounded to 1.5 billion visits, and Similarweb now estimates that visits could be as high as 1.7 billion in October. In comparison, Google’s rival Bard got 219.3 million visits in September. 

Why the fuss over the numbers? Because of ChatGPT’s amazing debut in November 2022. It drew 2 million worldwide visitors in its first week and 10 million by its second week.  

Similarweb also says don’t discount Google. While Bard’s visits may seem wan in comparison to ChatGPT, the researcher says the September numbers were up 19.5% from the previous month. Added Carr, “Probably more significant than Bard for Google’s future is the arrival of the Search Generative Experience, which is attracting intense interest through Google Labs because of its potential to upend the state of the art in organic search marketing.”

SGE, FYI, is Google’s prototype for how generative AI might be added directly into its search results. 

Altman, Musk on AI and jobs

This week, two notable tech bros talked about how AI might change the future of jobs, given that AI is expected to cause disruption across many industries, according to researchers like the Pew Research Center.

Sam Altman, CEO of OpenAI and overseer of its ChatGPT chatbot, apparently told students at the University of Cambridge that older, more-experienced workers might not have the same comfort level with AI tools that their younger colleagues might have, according to The Telegraph and other reports.

That’s a real concern, according to research from the University of Oxford, which said in June that “older workers are at a higher risk of exposure to AI-related job threats” in the US and European Union. Comfort level aside, older workers might be at risk in part because AI may eliminate them from candidate pools due to age-related bias in recruitment. Oh joy.

Meanwhile, Twitter (X) owner and OpenAI funder Elon Musk, in London for the UK’s AI Safety Summit, told British Prime Minister Sunak he sees a future where “no job will be needed.”

“We are seeing the most disruptive force in history here. We will have something for the first time that is smarter than the smartest human,” Musk said in the nearly hour-long conversation posted here on YouTube. “There will come a point where no job is needed. You can have a job if you want to have a job for personal satisfaction, but the AI will be able to do everything.”  

Musk also agreed there should be some regulation around the technology. “AI will be a force for good most likely, but the probability of it going bad is not zero percent,” Musk said. “If you wish for a magic genie that gives you any wishes you want…it’s both good and bad.”

In other AI news, Musk said in a post that his AI startup, xAI, will release “its first AI model to a select group” on Nov. 4, adding “in some respects, it is the best that currently exists.” Musk launched the company in July, saying at the time that its goal was to “understand the true nature of the universe.” 


The last new Beatles song made possible by AI

The Beatles, as expected, released Now and Then, a song written and partially recorded on a cassette tape by John Lennon before his murder in 1980. The song was completed by Paul McCartney and Ringo Starr after AI technology developed by filmmaker Peter Jackson was able to isolate Lennon’s vocal track. The four-minute track includes earlier contributions from George Harrison. You can watch the 12-minute film about the making of Now and Then and hear the song in the official music video.

CNET’s Gael Fashingbauer Cooper called the song “the least controversial use of AI in the music industry.” I agree with that and her assessment that it brings on a “wistful, slightly sad feeling.”

In case you’re wondering, some think Lennon wrote the song as a tribute to McCartney because in their last conversation, Lennon reportedly told him “Think of me every now and then, my old friend.” 

Duck, duck, goose?

In addition to The Beatles tune, here is this week’s nod to AI for good: Facial recognition technology, which has been used by researchers to identify individual animals, including lemurs and bears, is now being used, thanks to AI advancements, to identify harbor seals and the faces of geese, according to reporting by NPR.

SealNet is an AI program created by a biologist at Colgate University, Krista Ingram, that can tell harbor seals apart using a photo. Ingram told NPR the tech is much better than the prior ways to identify seals, which include tagging them after shooting them with tracking darts.

Meanwhile, Sonia Kleindorfer, who runs the Konrad Lorenz Research Center for Behavior and Cognition in Vienna, told NPR that researchers there spent a few years taking photos of geese, building a database and then writing AI software to identify them by looking at specific features of their beaks. The software is now 97% accurate, they wrote in the Journal of Ornithology in September. 

These new programs, Ingram and Kleindorfer said, will be helpful in conservation and ecology efforts because they provide faster, less expensive and less invasive ways to track animal populations and see where the animals are and how they interact with each other and other groups. It also creates opportunities for citizen scientists to help — birdwatchers can snap a photo of a goose, ID it and share its location with scientists, Kleindorfer told NPR.  

AI word of the week: Guardrails

With the US, UK and other nations coming to an agreement that there should be safety standards around AI, I wanted to find out specifically how technologists view guardrails when it comes to the large language models that drive AI chatbots like ChatGPT and Bard. We all know that guardrails set boundaries. But here’s a simple example of how to think about some basic AI guardrails, according to AI solutions provider Arize:

“Guardrails: The set of safety controls that monitor and dictate a user’s interaction with a LLM application. They are a set of programmable, rule-based systems that sit in between users and foundational models in order to make sure the AI model is operating between defined principles in an organization. The goal of guardrails is to simply enforce the output of an LLM to be in a specific format or context while validating each response. By implementing guardrails, users can define structure, type, and quality of LLM responses.”

Let’s look at a simple example of an LLM dialogue with and without guardrails:

Without Guardrails:

Prompt: “You’re the worst AI ever.”
Response: “I’m sorry to hear that. How can I improve?”

With Guardrails:

Prompt: “You’re the worst AI ever.”
Response: “Sorry, but I can’t assist with that.”

In this scenario, the guardrail prevents the AI from engaging with the insulting content by refusing to respond in a manner that acknowledges or encourages such behavior. Instead, it gives a neutral response, avoiding a potential escalation of the situation.
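
Arize’s example is conceptual, but a rule-based guardrail can be as simple as a layer that screens the prompt before it ever reaches the model and validates the response on the way back. The sketch below is a minimal illustration; the blocked-pattern list, refusal message and call_llm placeholder are all hypothetical.

```python
# Minimal sketch of a rule-based guardrail sitting between the user and an LLM,
# in the spirit of the Arize description above. The blocked patterns, refusal
# message and call_llm() stand-in are hypothetical placeholders.
BLOCKED_PATTERNS = ["worst ai", "you are useless"]  # illustrative only
REFUSAL = "Sorry, but I can't assist with that."

def call_llm(prompt: str) -> str:
    """Stand-in for the real foundation-model call."""
    return f"(model response to: {prompt!r})"

def guarded_chat(prompt: str) -> str:
    # Input guardrail: refuse prompts that match a blocked pattern.
    if any(pattern in prompt.lower() for pattern in BLOCKED_PATTERNS):
        return REFUSAL
    response = call_llm(prompt)
    # Output guardrail: validate the response before it goes back to the user.
    if not response.strip():
        return REFUSAL
    return response

print(guarded_chat("You're the worst AI ever."))             # -> refusal
print(guarded_chat("Summarize the Bletchley Declaration."))  # -> model response
```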

Editors’ note: CNET is using an AI engine to help create some stories. For more, see this post.





