
That Didn't Take Long: Hackers Are Beginning to Leverage ChatGPT – TechDecisions


OpenAI’s conversational AI chatbot ChatGPT has captured the attention of the tech industry, with the technology already helping IT professionals and developers create scripts and write code via a free preview. Its use cases extend to supplementing or even replacing search engines, offering a more engaging experience for users looking for information.

However, the chatbot’s ability to write code and solve technical problems is also opening another use case: hacking.

According to cybersecurity firm Check Point Software, analysis of several major underground hacking forums shows that hackers are already using ChatGPT to develop malicious tools, a capability that is especially attractive to threat actors without development skills.

In one case analyzed by Check Point’s researchers, a threat actor posted on a forum about experimenting with ChatGPT to recreate malware strains and techniques described in research publications and write-ups about common malware. One example included the code of a Python-based stealer that searches for common file types, copies them to a random folder inside the Temp folder, ZIPs them and uploads them to a hardcoded FTP server.

Indeed, the firm’s analysis of the script confirms the cybercriminal’s claim of creating a basic infostealer that searches for 12 common file types, such as Microsoft Office documents, PDFs and images.

“If any files of interest are found, the malware copies the files to a temporary directory, zips them, and sends them over the web,” Check Point researchers write. “It is worth noting that the actor didn’t bother encrypting or sending the files securely, so the files might end up in the hands of 3rd parties as well.”


Another example analyzed by researchers is a simple Java snippet that downloads PuTTY, a common SSH and Telnet client, and runs it covertly on the system using PowerShell. The script can be modified to download and run any program, including common malware families, researchers say.

The posts are consistent with the threat actor’s other contributions, which include several scripts that automate the post-exploitation phase and a C++ program that attempts to phish for user credentials.

In short, this particular hacker appears intent on showing less technically capable hackers how to use ChatGPT for malicious purposes.

In another example of threat actors sharing how ChatGPT helped them create malware, a hacker posted a Python script on a forum, claiming it was the first script they had ever created, written with help from OpenAI. The script at first seems benign, but it includes several functions that, with simple modifications, could turn the code into ransomware.

In Check Point’s third example of how ChatGPT can be used for malicious purposes, a cybercriminal posted about using the AI chat model to create a dark web marketplace to provide a platform for the automated trade of illegal or stolen goods such as financial information, malware or even drugs and weapons.

The company even asked ChatGPT itself how hackers can abuse OpenAI’s technology, and the chatbot replied that such abuse is entirely possible. However, ChatGPT also pointed to its own terms of service.

“It is important to note that OpenAI itself is not responsible for any abuse of its technology by third parties,” the chatbot responded to Check Point researchers. “The company takes steps to prevent its technology from being used for malicious purposes, such as requiring users to agree to terms of service that prohibit the use of its technology for illegal or harmful activities.”





