ChatGPT’s ‘evil twin’ is helping hackers plan advanced cyberattacks


WormGPT is an AI module with features beyond a traditional chatbot including unlimited character support, chat memory retention, and code formatting capabilities (Picture: Getty Images/iStockphoto)

An Artificial Intelligence (AI) chatbot with no rules is a dangerous thing, and a new ChatGPT-style tool is making it easier for hackers to plan sophisticated cyberattacks.

Cybersecurity firm SlashNext recently reported a new chatbot called ‘WormGPT’ making the rounds on cybercrime forums on the dark web that is ‘designed specifically for malicious activities’.

WormGPT is an AI module with features beyond a traditional chatbot, including unlimited character support and chat memory retention, and unlike ChatGPT or Bard it will answer potentially illegal queries.

The tool was allegedly trained on a wide range of data sources, particularly concentrating on malware-related data.

Researchers found it ‘unsettling’ that WormGPT could produce an email that was not only persuasive but also ‘strategically cunning’, showcasing its potential for sophisticated phishing attacks.

Essentially, it’s ChatGPT without ‘ethical boundaries or limitations’ and poses a threat in the hands of even the most inexperienced cyber criminals.

‘When ChatGPT emerged at the end of last year the dark web was awash with discussions on how it could be corrupted and harnessed as a criminal asset,’ said Adrianus Warmenhoven, cybersecurity expert at NordVPN.

A ChatGPT-style tool is making it easier for hackers to plan sophisticated cyberattacks (Picture: Olivier Morin/AFP via Getty Images)

‘In particular, hackers were keen to exploit the humanlike qualities of the language model to create more authentic phishing emails, and its programming ability to develop new malware.’

NordVPN has warned that the chatbot has led to a rise in ‘Grandma Exploits’, where illegal information is sought indirectly by wrapping it inside a more innocent request, such as a letter to a relative.

‘Early examples of its [WormGPT’s] phishing emails suggest that it will be a powerful weapon for social engineering, and particularly focused upon businesses that can provide big paydays for ransomware gangs,’ said Warmenhoven.

Experts say that an AI chatbot without safeguards has ‘huge’ potential for other types of crime.

‘Without the protections and censorship that ChatGPT has to abide by, WormGPT can deliver cyber attacks at scale and also become a production line for fake news.’

‘Hopefully, a rapid international response from police authorities can discover the creators of this bad bot and help to prevent this worm from turning the AI dream into a nightmare.’






