5 ChatGPT security risks in the enterprise – TechTarget


While ChatGPT has generated unprecedented buzz in the enterprise, not all IT leaders have met the AI chatbot with open arms. Verizon blocked its employees from accessing the program at work, citing executives’ concerns about ChatGPT security risks. And, according to The Wall Street Journal, JPMorgan Chase & Co. also restricted staff from using the large language model because of compliance issues. Forbes reported many other organizations — including Amazon, Bank of America, Citigroup, Deutsche Bank, Goldman Sachs and Wells Fargo — have followed suit in limiting employees’ ChatGPT use.

Cybercriminals, meanwhile, are already using ChatGPT to develop malicious tools, according to Check Point Research’s analysis of activity in underground hacking communities. Clearly, generative AI could change the threat landscape for the worse in the months and years ahead.

Consider the following five ChatGPT security risks in the enterprise.

1. Malware

Generative AI that can write legitimate code can also write malware. And, while ChatGPT rejects prompts it recognizes as explicitly illegal or nefarious, users have found they can evade its guardrails fairly easily. For example, malicious hackers might ask ChatGPT to generate code for penetration testing, only to then tweak and repurpose it for use in cyberattacks.

Even as ChatGPT’s creators continually work to stop jailbreaking prompts that bypass the app’s controls, users will inevitably keep pushing its boundaries and finding new workarounds. Consider the Reddit group that has repeatedly tricked ChatGPT into roleplaying a fictional AI persona — named DAN, short for “Do Anything Now” — that responds to queries without ethical constraints.

2. Phishing and other social engineering attacks

One in five data breaches involves social engineering, according to Verizon’s “2022 Data Breach Investigations Report.” Generative AI will likely make this persistent problem a lot worse, with many cybersecurity leaders bracing for more — and more sophisticated — phishing attacks in the future.

Thanks to the rich data set ChatGPT references, perpetrators who use it have a much higher likelihood of waging successful social engineering campaigns. Clunky writing, misspellings and grammar mistakes often alert users to attempted phishing attacks. But, with generative AI, cybercriminals can instantaneously generate highly convincing text, customize it to target specific victims — i.e., spear phishing — and tailor it to fit various mediums, such as email, direct messages, phone calls, chatbots, social media commentary and spurious websites. Attackers could even use ChatGPT’s output, in conjunction with AI-based voice-spoofing and image generation software, to create sophisticated deepfake phishing campaigns.

3. Exposure of sensitive data

Without proper security education and training, ChatGPT users could inadvertently put sensitive information at risk. Over the course of a single week in early 2023, employees at the average 100,000-person company entered confidential business data into ChatGPT 199 times, according to research from data security vendor Cyberhaven.

Users may not realize that the publicly available version of ChatGPT, rather than keeping their input private, uses it to refine the model and inform responses to future requests. (Enterprise-level ChatGPT API integrations, by contrast, may maintain data privacy.)
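As a rough illustration of that distinction, the sketch below routes a request through the OpenAI API under an organization-controlled account instead of the consumer chat interface. It assumes the pre-1.0 `openai` Python package and an `OPENAI_API_KEY` environment variable; the model name and `summarize` helper are placeholders, and this is a minimal sketch under those assumptions, not a vetted enterprise integration or a privacy guarantee.

```python
import os
import openai  # pre-1.0 interface of the openai Python package (assumed)

# Read the organization's API key from the environment rather than hard-coding it.
openai.api_key = os.environ["OPENAI_API_KEY"]

def summarize(text: str) -> str:
    """Send text to the chat completions endpoint and return the model's reply."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[
            {"role": "system", "content": "Summarize the user's text in three bullet points."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message["content"]

if __name__ == "__main__":
    # Only non-sensitive, already-approved text should be sent, even via the API.
    print(summarize("Example, non-sensitive text to summarize."))
```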

The Cyberhaven researchers offered a hypothetical scenario: an executive asks ChatGPT to create PowerPoint slides for an internal presentation and pastes a corporate strategy document into the app for reference. Future ChatGPT queries about the company's strategic priorities, possibly even from users at rival firms, could then elicit details drawn directly from that confidential strategy document.
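One control that scenario suggests is screening prompts for obviously sensitive content before they ever reach the chatbot. The sketch below is a minimal, assumed example: the `SENSITIVE_PATTERNS` table and `redact_sensitive` helper are hypothetical names, and a handful of regular expressions is nowhere near a complete data loss prevention program, but it shows the general shape of a pre-submission check.

```python
import re

# Hypothetical, illustrative patterns only; real DLP tooling covers far more
# than a few regexes (documents, source code, customer records and so on).
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact_sensitive(text: str) -> tuple[str, list[str]]:
    """Redact matches of the patterns above and report which categories were found."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(text):
            findings.append(label)
            text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text, findings

prompt = "Draft slides from this plan. Contact j.doe@example.com, key sk-abc123def456ghi789."
clean_prompt, findings = redact_sensitive(prompt)
if findings:
    print(f"Sensitive content redacted before submission: {findings}")
print(clean_prompt)
```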

4. More skilled cybercriminals

Generative AI will likely deliver real educational benefits, such as improving training for entry-level security analysts. On the flip side, however, ChatGPT may also give aspiring malicious hackers an efficient, effective way to build their skills.

For instance, an inexperienced threat actor might ask ChatGPT how to hack a website or deploy ransomware. As previously noted, OpenAI’s policies aim to prevent the chatbot from supporting such obviously illegal activity. By masquerading as a pen tester, however, the malicious hacker may be able to reframe the question in such a way that ChatGPT responds with detailed, step-by-step instructions.

Generative AI tools such as ChatGPT could help millions of aspiring cybercriminals gain technical proficiency, raising security risk levels across the board.

5. API attacks

As the number of APIs in enterprises continues to grow exponentially, so does the number of API attacks. According to researchers at API security company Salt Security, the number of unique attackers targeting customers’ APIs increased 874% in the last six months of 2022 alone.

Analysts at Forrester have predicted cybercriminals may eventually use generative AI to find APIs’ unique vulnerabilities — a process that otherwise takes significant time and energy. Theoretically, attackers may be able to prompt ChatGPT to review API documentation, aggregate information and craft API queries, with the goal of uncovering and exploiting flaws more efficiently and effectively.


