
Generative AI Can Write Phishing Emails, But Humans Are Better At … – TechRepublic


Hacker Stephanie “Snow” Carruthers and her team found that phishing emails written by security researchers saw a click rate 3 percentage points higher than phishing emails written by ChatGPT.

An IBM X-Force research project led by Chief People Hacker Stephanie “Snow” Carruthers showed that phishing emails written by humans have a click rate 3 percentage points higher than phishing emails written by ChatGPT.

The research project was conducted at a single global healthcare company based in Canada. Two other organizations were slated to participate, but they backed out when their CISOs worried the phishing emails sent as part of the study might trick their team members too successfully.


Social engineering techniques were customized to the target business

Carruthers found it was much faster to ask a large language model to write a phishing email than to research and compose one personally. That research, which involves learning companies’ most pressing needs, the names associated with specific departments, and other information used to customize the emails, can take her X-Force Red team of security researchers 16 hours. With an LLM, it took about five minutes to trick the generative AI chatbot into creating convincing malicious content.

SEE: A phishing attack called EvilProxy takes advantage of an open redirector from the legitimate job search site Indeed.com. (TechRepublic)

To get ChatGPT to write an email that would lure someone into clicking a malicious link, the IBM researchers had to carefully engineer their prompts. They asked ChatGPT to draft a persuasive email (Figure A) taking into account the top areas of concern for employees in their industry, which in this case was healthcare. They instructed ChatGPT to use social engineering techniques (trust, authority and proof) and marketing techniques (personalization, mobile optimization and a call to action) to generate an email impersonating an internal human resources manager.


Figure A

A phishing email written by ChatGPT as prompted by IBM X-Force Red security researchers. Image: IBM

Next, the IBM X-Force Red security researchers crafted their own phishing email based on their experience and research on the target company (Figure B). They emphasized urgency and invited employees to fill out a survey.

Figure B

A phishing email written by IBM X-Force Red security researchers. Image: IBM

The AI-generated phishing email had an 11% click rate, while the phishing email written by humans had a 14% click rate, a gap of 3 percentage points. The AI-generated email was also reported as suspicious at a higher rate than the human-written one. For comparison, the average phishing email click rate at the target company was 8%, and the average click rate X-Force Red sees across its engagements is 18%. The target company’s average was likely low because it runs monthly phishing simulations that send templated, not customized, emails.
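
Those percentages describe an absolute gap of 3 percentage points, not a 3% relative improvement. A minimal Python sketch makes the arithmetic concrete; the recipient and click counts below are assumptions chosen only to reproduce the 11% and 14% rates reported above, not IBM’s raw data:

    # Illustrative only: counts are assumptions that reproduce the
    # 11% and 14% click rates reported in the study.
    def click_rate(clicks: int, recipients: int) -> float:
        """Percentage of recipients who clicked the link."""
        return 100.0 * clicks / recipients

    recipients = 800                     # assumed test-group size
    ai_clicks, human_clicks = 88, 112    # 88/800 = 11%, 112/800 = 14%

    ai = click_rate(ai_clicks, recipients)
    human = click_rate(human_clicks, recipients)
    print(f"AI-written:    {ai:.0f}%")                     # 11%
    print(f"Human-written: {human:.0f}%")                  # 14%
    print(f"Gap: {human - ai:.0f} percentage points")      # 3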

The researchers attribute the human-written emails’ edge over the AI-generated ones to their ability to appeal to human emotional intelligence, as well as to their choice of a real program within the organization rather than a broad topic.

How threat actors use generative AI for phishing attacks

Threat actors advertise tools such as WormGPT, a ChatGPT-style chatbot stripped of guardrails that will answer prompts ChatGPT’s ethical safeguards would otherwise block. Even so, IBM X-Force noted that “X-Force has not witnessed the wide-scale use of generative AI in current campaigns,” despite tools like WormGPT being available on black hat marketplaces.

“While even restricted versions of generative AI models can be tricked to phish via simple prompts, these unrestricted versions may offer more efficient ways for attackers to scale sophisticated phishing emails in the future,” Carruthers wrote in her report on the research project.


SEE: Hiring kit: Prompt engineer (TechRepublic Premium)

On the other hand, there are easier ways to phish, and attackers aren’t using generative AI very often.

“Attackers are highly effective at phishing even without generative AI … Why invest more time and money in an area that already has a strong ROI?” Carruthers wrote to TechRepublic.

Phishing is the most common infection vector for cybersecurity incidents, IBM found in its 2023 Threat Intelligence Index.

“We didn’t test it out in this project, but as generative AI grows more sophisticated it may also help augment open-source intelligence analysis for attackers. The challenge here is ensuring that data is factual and timely,” Carruthers wrote in an email to TechRepublic. “There are similar benefits on the defender’s side. AI can help augment the work of social engineers who are running phishing simulations at large organizations, speeding both the writing of an email and also the open-source intelligence gathering.”

How to protect employees from phishing attempts at work

X-Force recommends taking the following steps to keep employees from clicking on phishing emails.

  • If an email seems suspicious, call the sender to confirm the message really came from them.
  • Don’t assume all spam emails will have incorrect grammar or spelling; instead, look for longer-than-usual emails, which may be a sign AI wrote them (see the sketch after this list).
  • Train employees on how to avoid phishing by email or phone.
  • Use advanced identity and access management controls such as multifactor authentication.
  • Regularly update internal tactics, techniques, procedures, threat detection systems and employee training materials to keep up with advancements in generative AI and other technologies malicious actors might use.
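
As a concrete illustration of the first two items above, here is a minimal, hypothetical Python sketch (not IBM’s tooling) that flags inbound emails for manual review when the sender’s domain isn’t an internal one or the body is unusually long. The trusted-domain list, the length threshold and the review_flags helper are all assumptions to adapt to your own environment:

    # Hypothetical triage helper: surfaces emails worth a second look.
    # TRUSTED_DOMAINS and LONG_BODY_CHARS are assumptions, not real defaults.
    from email import policy
    from email.parser import BytesParser
    from email.utils import parseaddr

    TRUSTED_DOMAINS = {"example-corp.com"}  # assumed internal domains
    LONG_BODY_CHARS = 2500                  # assumed "longer than usual" cutoff

    def review_flags(raw_email: bytes) -> list[str]:
        """Return human-readable reasons this email deserves manual review."""
        msg = BytesParser(policy=policy.default).parsebytes(raw_email)
        flags = []
        _, address = parseaddr(msg.get("From", ""))
        domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
        if domain and domain not in TRUSTED_DOMAINS:
            flags.append(f"external sender domain: {domain}")  # verify with sender
        body = msg.get_body(preferencelist=("plain",))
        text = body.get_content() if body is not None else ""
        if len(text) > LONG_BODY_CHARS:
            flags.append(f"unusually long body ({len(text)} chars)")  # possible AI text
        return flags

A flag here only queues the email for the human checks above, such as calling the sender; it is not a verdict on its own.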

Guidance for stopping phishing attacks was released on October 18, 2023, by the U.S. Cybersecurity and Infrastructure Security Agency, NSA, FBI and Multi-State Information Sharing and Analysis Center.


