
ChatGPT's Data Protection Blind Spots and How Security Teams Can Solve Them


In the short time since their inception, ChatGPT and other generative AI platforms have rightfully gained a reputation as ultimate productivity boosters. However, the very same technology that enables rapid production of high-quality text on demand can also expose sensitive corporate data. A recent incident, in which Samsung software engineers pasted proprietary code into ChatGPT, demonstrates how easily this tool can become a data leakage channel. This exposure poses a demanding challenge for security stakeholders, since none of the existing data protection tools can ensure that no sensitive data reaches ChatGPT. In this article we'll explore this security challenge in detail and show how browser security solutions can address it, enabling organizations to fully realize ChatGPT's productivity potential without compromising data security.

The ChatGPT data protection blind spot: How can you govern text insertion in the browser?

Whenever an employee pastes or types text into ChatGPT, that text is no longer governed by the organization's data protection tools and policies. It doesn't matter whether the text was copied from a traditional data file, an online doc, or another source. That, in fact, is the problem. Data Leak Prevention (DLP) solutions – from on-prem agents to CASB – are all file-oriented. They apply policies to files based on their content, preventing actions such as modifying, downloading, and sharing. This capability is of little use for ChatGPT data protection, however: there are no files involved. Usage consists of pasting copied text snippets or typing directly into a web page, which is beyond the governance and control of any existing DLP product.


How browser security solutions prevent insecure data usage in ChatGPT

LayerX launched its browser security platform for continuous monitoring, risk analysis, and real-time protection of browser sessions. Delivered as a browser extension, LayerX has granular visibility into every event that takes place within the session. This enables LayerX to detect risky behavior and configure policies to prevent pre-defined actions from taking place.

In the context of protecting sensitive data from being uploaded to ChatGPT, LayerX leverages this visibility to single out attempted text insertion events, such as 'paste' and 'type', within the ChatGPT tab. If the content of a 'paste' event violates corporate data protection policies, LayerX blocks the action altogether.
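To make the mechanism concrete, here is a minimal sketch of how a browser-extension content script could intercept paste events and block those matching a deny-list. This is an illustration of the general technique, not LayerX's actual implementation; the `SENSITIVE_PATTERNS` list is hypothetical:

```typescript
// Hypothetical content script: intercept paste events on a monitored page
// and block any paste whose clipboard text matches a configured deny-list.
const SENSITIVE_PATTERNS: RegExp[] = [
  /\bAKIA[0-9A-Z]{16}\b/,                 // AWS access key IDs (illustrative)
  /-----BEGIN (RSA )?PRIVATE KEY-----/,   // pasted private keys (illustrative)
];

document.addEventListener(
  "paste",
  (event: ClipboardEvent) => {
    const text = event.clipboardData?.getData("text/plain") ?? "";
    if (SENSITIVE_PATTERNS.some((pattern) => pattern.test(text))) {
      // Cancel the paste before the text reaches the page's input field.
      event.preventDefault();
      event.stopImmediatePropagation();
      console.warn("Paste blocked: content matched a data protection policy.");
    }
  },
  true // capture phase, so the check runs before the page's own handlers
);
```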

To enable this capability, security teams using LayerX define the phrases or regular expressions they want to protect from exposure, then create a LayerX policy that is triggered whenever pasted or typed text matches one of these patterns.
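As a sketch of what such a policy definition might look like (the `DataProtectionPolicy` shape and all field names here are hypothetical, not LayerX's schema):

```typescript
// Hypothetical shape of a data protection policy: the patterns to match and
// the action to take when pasted or typed text matches one of them.
interface DataProtectionPolicy {
  name: string;
  patterns: RegExp[];       // phrases or regular expressions to protect
  appliesTo: string[];      // host names the policy is enforced on
  action: "block" | "warn"; // what to do on a match
}

const chatGptPolicy: DataProtectionPolicy = {
  name: "Block proprietary code in ChatGPT",
  patterns: [/\bACME_INTERNAL\b/, /\bproject-phoenix\b/i], // illustrative strings
  appliesTo: ["chat.openai.com"],
  action: "block",
};

// A simple evaluator: does the inserted text trip the policy?
function violates(policy: DataProtectionPolicy, text: string): boolean {
  return policy.patterns.some((pattern) => pattern.test(text));
}
```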

See what it looks like in action:

[Screenshot: Setting the policy in the LayerX Dashboard]
[Screenshot: A user who tries to copy sensitive information into ChatGPT is blocked by LayerX]

In addition, organizations that wish to prevent their employees from using ChatGPT altogether can use LayerX to block access to the ChatGPT website or to any other online AI-based text generator, including ChatGPT-like browser extensions.
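For reference, one common way a Manifest V3 browser extension can block site access outright is Chrome's declarativeNetRequest API. The sketch below illustrates that general approach, not LayerX's mechanism, and the `BLOCKED_SITES` list is illustrative:

```typescript
// Sketch: block top-level navigation to ChatGPT-like sites from a Manifest V3
// extension. Requires the "declarativeNetRequest" permission in manifest.json.
const BLOCKED_SITES = ["chat.openai.com", "chatgpt.com"]; // illustrative list

chrome.declarativeNetRequest.updateDynamicRules({
  removeRuleIds: BLOCKED_SITES.map((_, i) => i + 1), // clear prior rules first
  addRules: BLOCKED_SITES.map((host, i) => ({
    id: i + 1,
    priority: 1,
    action: { type: chrome.declarativeNetRequest.RuleActionType.BLOCK },
    condition: {
      urlFilter: `||${host}^`, // match the host and its subdomains
      resourceTypes: [chrome.declarativeNetRequest.ResourceType.MAIN_FRAME],
    },
  })),
});
```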

Learn more about LayerX ChatGPT data protection here.

Using LayerX’s browser security platform to gain comprehensive SaaS protection

What makes LayerX the only solution that can effectively address the ChatGPT data protection gap is its placement in the browser itself, with real-time visibility and policy enforcement over the actual browser session. This placement also makes it well suited to protecting against any cyber threat that targets data or user activity in the browser, as is the case with SaaS applications.


Users interact with SaaS apps through their browsers, which makes it straightforward for LayerX to protect both the data within these apps and the apps themselves. It does so by enforcing the following types of policies on users' activities throughout their web sessions:

Data protection policies: On top of standard file-oriented protection (prevention of copy/share/download/etc.), LayerX provides the same granular protection it does for ChatGPT. In fact, once the organization has defined which strings must not be pasted, the same policies can be expanded to prevent exposing this data to any web or SaaS location (a sketch of this generalization follows the list below).

Account compromise mitigation: LayerX monitors each user's activities in the organization's SaaS apps and detects any anomalous behavior or data interaction that indicates the user's account is compromised. LayerX policies can then terminate the session or disable the user's data interaction abilities in the app.
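To illustrate the first policy type above, here is a hypothetical sketch of how a deny-list check can generalize from the ChatGPT tab to any origin. The `PolicyMap` structure and all patterns are assumptions for illustration, not LayerX's code:

```typescript
// Sketch: the same deny-list enforcement, generalized to any web or SaaS
// origin. Policies map host names to regexes; "*" applies everywhere.
type PolicyMap = Record<string, RegExp[]>;

const policies: PolicyMap = {
  "*": [/-----BEGIN (RSA )?PRIVATE KEY-----/], // enforced on every site
  "chat.openai.com": [/\bproject-phoenix\b/i], // ChatGPT-specific rule
};

function blockedPatterns(host: string): RegExp[] {
  return [...(policies["*"] ?? []), ...(policies[host] ?? [])];
}

document.addEventListener(
  "paste",
  (event: ClipboardEvent) => {
    const text = event.clipboardData?.getData("text/plain") ?? "";
    if (blockedPatterns(location.hostname).some((p) => p.test(text))) {
      event.preventDefault(); // same check, now on any matching origin
    }
  },
  true
);
```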




