
ChatGPT Data Breach Confirmed as Security Firm Warns of Vulnerable Component Exploitation – SecurityWeek


ChatGPT creator OpenAI has confirmed a data breach caused by a bug in an open source library, just as a cybersecurity firm noticed that a recently introduced component is affected by an actively exploited vulnerability.

OpenAI said on Friday that it had taken the chatbot offline earlier in the week while it worked with the maintainers of the Redis data platform to patch a flaw that resulted in the exposure of user information. 

The issue was related to ChatGPT’s use of Redis-py, an open source Redis client library, and it was introduced by a change made by OpenAI on March 20. 

The chatbot’s developers use Redis to cache user information on their servers to avoid having to check the database for every request. The Redis-py library serves as the Python interface to Redis.
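To illustrate the caching pattern described above, here is a minimal cache-aside sketch using the redis-py client. The key scheme, the five-minute TTL, and the load_user_from_database helper are illustrative assumptions, not OpenAI’s actual code.

```python
# Minimal cache-aside sketch with redis-py: check Redis first, fall back to
# the database on a miss, then populate the cache for subsequent requests.
# Key names, TTL, and the database helper are illustrative assumptions.
import json

import redis


def load_user_from_database(user_id: str) -> dict:
    # Stand-in for the real database lookup.
    return {"id": user_id, "plan": "plus"}


r = redis.Redis(host="localhost", port=6379, db=0)


def get_user_info(user_id: str) -> dict:
    cache_key = f"user:{user_id}"
    cached = r.get(cache_key)
    if cached is not None:
        return json.loads(cached)  # cache hit: no database query needed
    info = load_user_from_database(user_id)  # cache miss: query the database
    r.setex(cache_key, 300, json.dumps(info))  # cache the result for five minutes
    return info
```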

The bug introduced by OpenAI resulted in ChatGPT users being shown chat data belonging to others.

According to OpenAI’s investigation, the titles of active users’ chat history and the first message of a newly created conversation were exposed in the data breach. The bug also exposed payment-related information belonging to 1.2% of ChatGPT Plus subscribers, including first and last name, email address, payment address, payment card expiration date, and the last four digits of the customer’s card number. 

This information may have been included in subscription confirmation emails sent on March 20, and it may also have been displayed on the subscription management page in ChatGPT accounts that day. OpenAI has confirmed that the information was exposed during a nine-hour window on March 20, but admitted that data may have been leaked prior to that date as well. 


“We have reached out to notify affected users that their payment information may have been exposed. We are confident that there is no ongoing risk to users’ data,” OpenAI said in a blog post. 


The blog post describes the technical details of the issue and the action taken by the company in response.

This was not the only ChatGPT security issue that came to light last week. Also on Friday, threat intelligence company GreyNoise issued a warning regarding a new ChatGPT feature that expands the chatbot’s information-collecting capabilities through the use of plugins. 

GreyNoise noticed that the code examples provided by OpenAI to customers interested in integrating their plugins with the new feature include a Docker image for the MinIO distributed object storage system. 

The Docker image version used in OpenAI’s example, release 2022-03-17, is affected by CVE-2023-28432, a potentially serious information disclosure vulnerability. The security hole can be leveraged to obtain secret keys and root passwords, and GreyNoise has already seen attempts to exploit the vulnerability in the wild.
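For defenders who want to verify their own deployments, the sketch below probes the bootstrap endpoint that public advisories associate with CVE-2023-28432. The endpoint path and the response markers are assumptions drawn from those advisories rather than from this article, and the check should only be run against MinIO instances you are authorized to test.

```python
# Rough defensive check for CVE-2023-28432 exposure, based on public advisories:
# on vulnerable clustered MinIO releases, an unauthenticated POST to
# /minio/bootstrap/v1/verify can return environment variables such as
# MINIO_ROOT_USER and MINIO_ROOT_PASSWORD. The endpoint and markers below are
# assumptions from those advisories, not details reported in the article.
import sys

import requests


def appears_vulnerable(base_url: str) -> bool:
    resp = requests.post(f"{base_url}/minio/bootstrap/v1/verify", timeout=5)
    leaked_markers = ("MINIO_ROOT_PASSWORD", "MINIO_SECRET_KEY", "MinioEnv")
    return resp.ok and any(marker in resp.text for marker in leaked_markers)


if __name__ == "__main__":
    # Only probe MinIO instances you own or are authorized to test.
    target = sys.argv[1] if len(sys.argv) > 1 else "http://localhost:9000"
    print("potentially vulnerable" if appears_vulnerable(target) else "no obvious exposure")
```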

“While we have no information suggesting that any specific actor is targeting ChatGPT example instances, we have observed this vulnerability being actively exploited in the wild. When attackers attempt mass-identification and mass-exploitation of vulnerable services, ‘everything’ is in scope, including any deployed ChatGPT plugins that utilize this outdated version of MinIO,” the security firm warned. 

Related: ChatGPT Integrated Into Cybersecurity Products as Industry Tests Its Capabilities

Related: ChatGPT and the Growing Threat of Bring Your Own AI to the SOC

Related: ‘Grim’ Criminal Abuse of ChatGPT is Coming, Europol Warns 





