
ChatGPT banned in Italy pending data protection investigation


It is the first time a national regulator has made such a move against ChatGPT, OpenAI’s wildly successful chatbot that launched late last year. While the tech has wowed many with its ability to write computer code and pass tough exams, its development has left some concerned.

By Tom Acres, technology reporter


ChatGPT has been banned in Italy while the country’s data protection authorities investigate its collection of user information.

The chatbot, which is operated by OpenAI and financially backed by Microsoft, has amassed more than 100 million monthly active users since launching late last year.

It has threatened to upend everything from search engines to creative writing, though there have been concerns regarding accuracy and biases.

On Friday, Italian authorities said the service would be blocked pending an investigation into a suspected breach of data collection rules and a failure to verify the age of its users.

The software is supposed to be reserved for people aged 13 and over.

Italy's data protection agency also alleged that the San Francisco-based firm had no "legal basis that justifies the massive collection and storage of personal data in order to 'train' the algorithms underlying the operation".

It is the first time a national regulator has made such a move against ChatGPT, which last week fell victim to a cyber security breach that exposed some user conversations and payment details.

OpenAI did not immediately respond to a request for comment.


‘No surprise’ if more regulators take action

The ban comes after EU law enforcement agency Europol warned that ChatGPT could be exploited by criminals and used to spread disinformation online.

Data, privacy, and cyber security lawyer Edward Machin, of Ropes & Gray, said "it wouldn't be surprising" to see more regulators follow Italy's lead.

“It’s easy to forget ChatGPT has only been widely used for a matter of weeks,” he said.

“Most users won’t have stopped to consider the privacy implications of their data being used to train the algorithms that underpin the product. Although they may be willing to accept that trade, the allegation here is users aren’t being given the information to allow them to make an informed decision, and more problematically, that in any event there may not be a lawful basis to process their data.”


While ChatGPT has wowed many observers with its ability to write computer code, solve problems, and pass the toughest exams, the rate of its development and adoption has left some worried.

Elon Musk joined a group of AI experts this week in calling for a pause in the training of systems like ChatGPT, which are known as large language models.

They are trained on vast amounts of text from the internet and other sources, such as books.

The letter, issued by the Future of Life Institute and signed by more than 1,000 people, warned “AI systems with human-competitive intelligence can pose profound risks to society and humanity”.


It followed the release of OpenAI's GPT-4, a new and improved incarnation of the tech behind its chatbot. It already powers Microsoft's Bing search engine and is being added to Office apps like Teams and Outlook.


Four AI experts whose work was cited in the letter have since distanced themselves from the call.

Professor Emily Bender, of the University of Washington, said some of the claims made were “unhinged”.

