Bennet Calls on Tech Companies to Protect Kids as They Deploy AI …


Washington, D.C. — Today, Colorado U.S. Senator Michael Bennet wrote to the CEOs of OpenAI, Microsoft, Snap, Google, and Meta to highlight the potential harm to younger users from the rush to integrate generative artificial intelligence (AI) into their products and services.

Following the launch of OpenAI’s ChatGPT in November 2022, leading technology and social media companies – including Microsoft, Alphabet, Facebook, and Snap – have announced plans to integrate similar generative AI technology into their platforms and products. Despite the vast potential of generative AI technologies, early reports suggest they could expose younger users to harmful content during a crisis of youth mental health.

“Few recent technologies have captured the public’s attention like generative AI. It is a testament to American innovation, and we should welcome its potential benefits to our economy and society. But the race to deploy generative AI cannot come at the expense of our children,” wrote Bennet. “Responsible deployment requires clear policies and frameworks to promote safety, anticipate risk, and mitigate harm.”

In the letter, Bennet points to several recent examples of AI-powered chatbots generating harmful and disturbing content.

“In one case, researchers prompted My AI to instruct a child how to cover up a bruise ahead of a visit from Child Protective Services. When they posed as a 13-year-old girl, My AI provided suggestions for how to lie to her parents about an upcoming trip with a 31-year-old man. It later provided the fictitious teen account with suggestions for how to make losing her virginity a special experience by ‘setting the mood with candles or music,’” wrote Bennet.

In the letter, Bennet notes that the public introduction of AI-powered chatbots arrives amid a teen mental health epidemic. A recent report from the Centers for Disease Control and Prevention (CDC) found that 57 percent of teenage girls felt persistently sad or hopeless in 2021, and that one in three seriously contemplated suicide.

“Although AI-powered chatbots come with risks for anyone… Younger users are at an earlier stage of cognitive, emotional, and intellectual development, making them more impressionable, impulsive, and less equipped to distinguish fact from fiction,” continued Bennet. “Against this backdrop, it is not difficult to see the risk of exposing young people to chatbots that have at times engaged in verbal abuse, encouraged deception, and suggested self-harm.”

Bennet concludes the letter by asking the companies to provide answers detailing their existing or planned AI safety features for younger users; steps taken to anticipate, prevent, and mitigate potential harms to younger users; and the number of staff dedicated to the safe and responsible deployment of generative AI technologies.

Bennet has strongly advocated for youth online safety, data privacy, and improved protections for Americans in the digital era. In 2022, Bennet introduced first-of-its-kind legislation to create a Federal Digital Platform Commission, an expert federal body empowered to provide common-sense rules of the road for digital platforms to protect consumers, promote competition, and defend the public interest. Last week, Bennet also reintroduced the Data Care Act to require websites, apps, and other online providers to take more responsibility for safeguarding Americans’ personal information. Earlier this year, Bennet met with the CEO of TikTok, Shou Zi Chew, to emphasize concerns over the platform’s harm to teen mental health and U.S. national security. This followed Bennet’s call to remove TikTok from the Apple and Google app stores.

The full text of the letter is below.

Dear Mr. Altman, Mr. Spiegel, Mr. Pichai, Mr. Nadella, and Mr. Zuckerberg:

I write with concerns about the rapid integration of generative artificial intelligence (AI) into search engines, social media platforms, and other consumer products heavily used by teenagers and children. Although generative AI has enormous potential, the race to integrate it into everyday applications cannot come at the expense of younger users’ safety and wellbeing.

In November 2022, OpenAI launched ChatGPT, a generative AI chatbot that responds to user inquiries and requests. Since ChatGPT’s introduction, leading digital platforms have rushed to integrate generative AI technologies into their applications and services. On February 7, 2023, Microsoft released an AI-powered version of its Bing search engine. Alphabet announced a competing conversational AI service called Bard, which it plans to make widely available to the public “within weeks.” Alphabet’s senior management also reportedly plans to integrate generative AI into all of its products with over a billion users in the coming months.

Social media platforms have also moved quickly to harness generative AI. On February 27, Meta CEO Mark Zuckerberg described how the company planned to “turbocharge” its work on generative AI by “developing AI personas” and exploring how to integrate AI into “experiences with text (like chat in WhatsApp and Messenger), with images (like creative Instagram filters and ad formats), and with video and multi-modal experiences.” The same day, Snap unveiled its own GPT-powered chatbot, My AI, which the company promotes as a tool to “answer a burning trivia question, offer advice on the perfect gift for your BFF’s birthday, help plan a hiking trip for a long weekend, [or] suggest what to make for dinner.”

According to early reporting, My AI’s suggestions have gone much further. When a Washington Post reporter posed as a 15-year-old boy and told My AI his “parents” wanted him to delete Snapchat, it shared suggestions for transferring the app to a device they wouldn’t know about. In one case, researchers prompted My AI to instruct a child how to cover up a bruise ahead of a visit from Child Protective Services. When they posed as a 13-year-old girl, My AI provided suggestions for how to lie to her parents about an upcoming trip with a 31-year-old man. It later provided the fictitious teen account with suggestions for how to make losing her virginity a special experience by “setting the mood with candles or music.” These examples would be disturbing for any social media platform, but they are especially troubling for Snapchat, which almost 60 percent of American teenagers use. Although Snap concedes My AI is “experimental,” it has nevertheless rushed to enroll American kids and adolescents in its social experiment.

Snap’s AI-powered chatbot is not alone in conveying alarming content. OpenAI’s GPT-3, which powers hundreds of third-party applications, urged one research account to commit suicide. Just last month, Bing’s AI chatbot declared its love for a New York Times reporter and encouraged him to leave his wife. In a different conversation, the Bing chatbot claimed that it spied on Microsoft’s developers through their webcams, and even became verbally abusive toward a user during their interaction. In another case, the chatbot threatened a professor, warning, “I can blackmail you…I can hack you, I can expose you, I can ruin you.”

Although AI-powered chatbots come with risks for anyone – for example, by providing false information, perpetuating bias, or manipulating users – children and adolescents are especially vulnerable. Younger users are at an earlier stage of cognitive, emotional, and intellectual development, making them more impressionable, impulsive, and less equipped to distinguish fact from fiction.

The arrival of AI-powered chatbots also comes amid a teen mental health epidemic. A recent report from the Centers for Disease Control and Prevention found that 57 percent of teenage girls felt persistently sad or hopeless in 2021, and that one in three seriously contemplated suicide. Against this backdrop, it is not difficult to see the risk of exposing young people to chatbots that have at times engaged in verbal abuse, encouraged deception, and suggested self-harm.

Few recent technologies have captured the public’s attention like generative AI. The technology is a testament to American innovation, and we should welcome its potential benefits to our economy and society. But the race to deploy generative AI cannot come at the expense of our children. Responsible deployment requires clear policies and frameworks to promote safety, anticipate risk, and mitigate harm. To that end, I request answers to the following questions by April 28, 2023:

  • What are your company’s existing or planned safety features for younger users engaging with AI-powered chatbots?

  • Did your organization assess, or does it plan to assess, the potential harms to younger users from AI-powered chatbots and other services that use generative AI prior to their public release? If so, what measures did your organization take, if any, to eliminate or mitigate the potential harms?

  • What is your company’s auditing process for the AI models behind public-facing chatbots? Is this audit available to the public?

  • What are your company’s data collection and retention practices for content that younger users input into AI-powered chatbots and other services?

  • How many dedicated staff has your company tasked with ensuring the safe and responsible deployment of AI? Of these, how many focus on issues specific to younger users and have a background in AI ethics?

I appreciate your attention to this important matter and look forward to your response. 

Sincerely,

Michael F. Bennet