Cybercriminals are turning to ChatGPT to generate extremely convincing phishing emails, researchers have warned – so how can internet users spot the scams?
Cybersecurity company Norton warned that criminals are turning to AI tools such as ChatGPT to create ‘lures’ to rob victims.
SCROLL DOWN FOR THE GUIDE
AI tools such as ChatGPT make it far harder to spot scams (Alamy)
A report in New Scientist suggested that using ChatGPT to generate emails could cut costs for cybercrime gangs by up to 96 percent.
ChatGPT also completely removes the language barrier for cybercriminal gangs around the world, warns Julia O’Toole, CEO of MyCena Security Solutions.
O’Toole said there are still ways to spot scam emails generated by AI tools, but the technology makes them far harder to detect.
She said: ‘Phishing has come on significantly since email scams first hit inboxes, but a lack of proficiency in language and culture has still been a major barrier for scammers, who have struggled to make their emails realistic.
‘While they still duped innocent people, many internet users were able to spot the spoofs and delete them.’
But those days are gone, she said.
ChatGPT is the ‘hottest topic’ on the dark web at present, according to O’Toole, as cybercriminals work out how to use it to defraud victims.
There are protections built into ChatGPT which are meant to stop it from being used in scams – but criminals are working out how to circumvent these.
She said: ‘The quality and speed of execution of ChatGPT makes it a powerful productivity hack.
‘With it, criminals can now multiply complex phishing campaigns, generating emails faster with higher chances of success.’
O’Toole warns that ChatGPT’s ability to generate accurate content means it can effectively impersonate anyone – and that AI tools able to access internet content are potentially a ‘weapon of cyber mass destruction’.
She said: ‘Hackers can use ChatGPT to trick people into giving up their usernames and passwords for their online accounts, or it can trick people into sending money or divulging personal information to criminals while deceiving them into thinking it is for legitimate purposes.’
Cybercriminals can use complex prompts to gather the information needed to craft a ‘bespoke’ cyber attack, she warned.
‘When criminals use ChatGPT, there are no cultural barriers. When the target receives an email from their “apparent” bank or CEO, there are no language tell-tale signs the email is bogus.
‘The tone, context and reason to carry out the bank transfer give no evidence to suggest the email is a scam.’
Ever since it launched in November 2022, ChatGPT has fascinated the cybercriminal community.
Posters on notorious cybercrime forums have discussed using the bot to write malware and even set up new dark web marketplaces for the sale of stolen credit cards and other illegal goods.
There are multiple fake ChatGPT apps that harvest user data – and cybersecurity vendor Bitdefender spotted a phishing scam in which users were directed to a fake ChatGPT that harvested bank details.
Norton also warned that phishing emails are only the tip of the iceberg – cybercriminals could use ChatGPT or similar software to create entirely fake chatbots to con internet users out of their money.
According to analytics firm SimilarWeb, ChatGPT averaged 13 million users per day in January, making it the fastest-growing internet app of all time.
It took TikTok about nine months after its global launch to reach 100 million users, and Instagram more than two years.
OpenAI, a private company backed by Microsoft Corp, made ChatGPT available to the public for free in late November 2022.
The five ways to spot phishing emails generated by AI
Spotting phishing emails generated by ChatGPT is far harder than spotting those generated by human beings, says Julia O’Toole, CEO of MyCena Security Solutions.
Here are five ways to spot that an email is a scam:
Hover over the email address to check it
On a PC, you can ‘hover’ your mouse over a ‘Contact Us’ link to see where it really leads, O’Toole says.
With any suspicious email, hover over the email address and check that it’s really from the domain (ie website address) you’d expect.
O’Toole says, ‘Despite the sophistication of ChatGPT, the email addresses used by phishers remain the same, so if it looks suspicious, it probably is.’
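For readers comfortable with a little code, the same check can be automated. The short Python sketch below is purely illustrative – the function names and the example ‘paypal.com’ domain are placeholders of our own, not tools O’Toole or MyCena provide – but it shows the principle: extract the domain from the sender’s address and compare it with the one you expect.

from email.utils import parseaddr

def sender_domain(from_header: str) -> str:
    # Pull the bare address out of a 'From' header, then keep everything after the '@'
    _, address = parseaddr(from_header)
    return address.rsplit('@', 1)[-1].lower() if '@' in address else ''

def looks_legitimate(from_header: str, expected_domain: str) -> bool:
    # Only an exact match (or a genuine subdomain) of the expected domain passes
    domain = sender_domain(from_header)
    return domain == expected_domain or domain.endswith('.' + expected_domain)

print(looks_legitimate('Support <help@paypa1-secure.com>', 'paypal.com'))  # False: lookalike domain
print(looks_legitimate('PayPal <service@paypal.com>', 'paypal.com'))       # True

Even then, lookalike domains and forged display names can be convincing, so treat a passing result as a first check rather than proof an email is genuine.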
Keep context in mind
If your bank or any other institution contacts you asking for information urgently, you should immediately be on alert.
Think about the context – why do they need this information? Why now?
O’Toole says, ‘Banks and security-savvy institutions avoid putting their customers in positions where confidential information is asked for instantly.’
Avoid hyperlinks
Hyperlinks to bank websites embedded in an email might seem convenient – but a legitimate bank will always let you phone them instead.
O’Toole says, ‘If an email arrives asking for personal information, never click on the link. Verify its authenticity first.
‘For example, if your bank contacts you via email asking for personal information, ignore the message and call the bank on the phone number found on its website.’
Pay attention to the artwork
ChatGPT might be able to generate clear copy, but criminal gangs won’t have access to a company’s official digital assets, such as its logos and branding.
That means that everything from page headers to the links you are asked to click may well look wrong.
O’Toole says, ‘Attackers will often cut and paste images of a company directly from the internet, but this will distort the image and make it look faded or out of focus. If any images or artwork in an email look poor quality, this could also suggest it’s a phish.’
Check any email against the legitimate website
While ChatGPT is great at generating text, it isn’t great at the finer details – and those small inconsistencies can give away a malicious email, O’Toole warns.
She says, ‘When you receive an email that you are concerned about, visit the apparent sender’s website directly. Are there phrases or branding that they tend to use in communications? Are these details included in the email?’
If anything looks suspicious, it probably is.