
F5 Warns Australian IT of Social Engineering Risk Escalation Due to … – TechRepublic


Experts from security firm F5 have argued that cyber criminals are unlikely to send armies of generative AI-driven bots into battle against enterprise security defences in the near future, because proven social engineering attack methods will be easier to mount with the same technology.

The release of generative AI tools such as ChatGPT has caused widespread fears that the democratization of powerful large language models could help bad actors around the world supercharge their efforts to hack businesses and steal sensitive data or hold it hostage.

F5, a multicloud security and application delivery provider, told TechRepublic that generative AI will cause social engineering attacks in Australia to grow in both volume and quality, as threat actors use the technology to deliver more convincing attempts to trick IT gatekeepers.


Social engineering attacks will grow in volume and quality

Dan Woods, global head of intelligence at F5.

Dan Woods, global head of intelligence at F5, said he is less worried than some about AI resulting in “killer robots” or a “nuclear holocaust,” but he is “very concerned about generative AI.” The biggest threat facing both enterprises and individuals, he said, is social engineering.

Australian IT leaders only need to interact with a tool such as ChatGPT, Woods said, to see how it can mount a persuasive argument on a topic as well as a persuasive counterargument, all with impeccable writing skills. This, he said, is a boon for bad actors around the world.

“Today, one person can socially engineer somewhere between 40 and 50 people at a time,” Woods said. “With generative AI — and the ability to synthesize the human voice — one criminal could start to social engineer almost an unlimited number of people a day and do it more effectively.”

SEE: DEF CON’s generative AI hacking challenge explored the cutting edge of security vulnerabilities.

The telltale signs Australian IT leaders have been teaching employees to treat as red flags in phishing or smishing attacks, such as problems with grammar, spelling and syntax, “will all go away.”

“We will see phishing and smishing attacks that will not have mistakes any more. Criminals will be able to write in perfect English,” Woods said. “These attacks could be well structured in any language — it is very impressive. So I worry about social engineering and phishing attacks.”

There were 76,000 cyber crime reports in Australia in the 2021–22 financial year, according to Australian Cyber Security Centre data, up 13% on the previous financial year (Figure A). Many of these attacks involved social engineering techniques.

Figure A

Reports of Australian cybercrime increased in the 2021–22 financial year. Image: ACSC

Enterprises on the receiving end of attack growth

Australian IT teams can expect to be on the receiving end of social engineering attack growth. F5 said the main counter to changing bad actor techniques and capabilities will be education to ensure employees are made aware of increasing attack sophistication due to AI.


“Scams that trick employees into doing something — like downloading a new version of a corporate VPN client or tricking accounts payable into paying some nonexistent merchant — will continue to happen,” Woods said. “They will be more persuasive and increase in volume.”

Woods added that organizations will need to ensure protocols are put in place, similar to existing financial controls in an enterprise, to guard against criminals’ growing persuasive power. This could include measures such as payments over a certain amount requiring multiple people to approve.
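Woods did not prescribe a specific mechanism, but a minimal sketch of such a control, with an assumed threshold and approver count, might look like this:

```python
from dataclasses import dataclass, field

# Illustrative values; real limits would come from the organisation's
# own financial-control policy.
DUAL_APPROVAL_THRESHOLD = 10_000.00
REQUIRED_APPROVERS = 2

@dataclass
class PaymentRequest:
    payee: str
    amount: float
    approvers: set[str] = field(default_factory=set)

    def approve(self, employee_id: str) -> None:
        # Approvers are stored in a set, so one person cannot
        # approve twice to satisfy the control.
        self.approvers.add(employee_id)

    def is_releasable(self) -> bool:
        # Small payments need one approver; payments over the
        # threshold need sign-off from multiple distinct people.
        if self.amount <= DUAL_APPROVAL_THRESHOLD:
            return len(self.approvers) >= 1
        return len(self.approvers) >= REQUIRED_APPROVERS

request = PaymentRequest(payee="new-merchant-ltd", amount=25_000.00)
request.approve("emp-001")
print(request.is_releasable())  # False: one approver is not enough
request.approve("emp-002")
print(request.is_releasable())  # True: two distinct approvers
```

The point of the control is that no single persuaded employee, however convincing the scam, can release a large payment alone.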

Bad actors will choose social engineering over bot attacks

An AI-supported wave of bot attacks may not be as imminent as the social engineering threat.

There have been warnings that armies of bots, supercharged by new AI tools, could be used by criminal organizations to launch more sophisticated automated attacks against enterprise cybersecurity defences, opening a new front in organisations’ war against cyber criminals.

Threat actors only rise to the level of security defence sophistication

However, Woods said that, based on his experience, bad actors tend to use only the level of sophistication required to launch successful attacks.

“Why throw additional resources at an attack if an unsophisticated attack method is already being successful?” he asked.

Woods, who has held security roles with the CIA and FBI, likens this to the art of lock picking.

“A lock picking expert can be equipped with all of the specialized advanced tools required to pick locks, but if the door is unlocked they don’t need them — they will just open the door,” Woods said. “Attackers are very much the same way.

“We are not really seeing AI launching bot attacks — it’s easier to move on to a softer target than use AI against, for example, an F5-protected layer.”

Organizations can expect “a profound and alarming impact on criminal activity,” but not on all criminal activity simultaneously.

“It is not until enterprises are protected by sophisticated countermeasures that we will see a rise in more sophisticated AI attacks,” Woods said.

Criminals will gravitate to less cyber-aware Australian sectors

This lock picking principle applies to the distribution of attacks across Australian enterprises. Jason Baden, F5’s regional vice president for Australia and New Zealand, said Australia remained a lucrative target for bad actors, and attacks were shifting to less protected sectors.

Jason Baden, regional vice president for Australia and New Zealand at F5.

“F5’s customer base in sectors like banking and finance, government and telecommunications, who are the traditional large targets, have been spending a lot of money and a lot of time and effort for many years to secure networks,” Baden said. “Their understanding is very high.

“Where we have seen the biggest increase over the last 12 months is in sectors that weren’t previously targeted, including education, health and facilities management. They are actively being targeted because they haven’t spent as much money on their security networks.”

Enterprises will improve cybersecurity defences with AI

IT teams will be just as enthusiastic about using the growing power of artificial intelligence to outwit bad actors. For example, AI and machine learning tools already make human-like decisions, based on trained models, in areas such as fraud detection.

To deploy AI for fraud detection, a customer’s fraud file is fed into a machine learning model. Because the fraud file contains transactions tied to confirmed fraud, it teaches the model what fraud looks like, and the model then uses that learning to identify future incidents of fraud in real time.
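F5 did not detail its toolchain, but as a rough sketch of the workflow Woods describes, a labelled fraud file can train an off-the-shelf classifier that then scores new transactions. The scikit-learn usage, the fraud file format and the feature names below are illustrative assumptions:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# The "fraud file": historical transactions labelled 1 (confirmed fraud)
# or 0 (legitimate). Features here are made up for illustration.
transactions = pd.DataFrame({
    "amount":      [120.0, 9800.0, 45.0, 7200.0, 60.0, 8900.0],
    "hour_of_day": [14, 3, 11, 2, 16, 4],
    "new_payee":   [0, 1, 0, 1, 0, 1],
    "is_fraud":    [0, 1, 0, 1, 0, 1],  # labels from the fraud file
})

features = ["amount", "hour_of_day", "new_payee"]
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(transactions[features], transactions["is_fraud"])

# Score an incoming transaction in real time. It does not have to match
# a past incident exactly; it only needs enough attributes in common.
incoming = pd.DataFrame([{"amount": 8500.0, "hour_of_day": 3, "new_payee": 1}])
fraud_probability = model.predict_proba(incoming[features])[0][1]
print(f"fraud probability: {fraud_probability:.2f}")
```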

SEE: Explore our comprehensive artificial intelligence cheat sheet.

“The fraud would not need to look exactly like previous incidents, but just have enough attributes in common that it can identify future fraud,” Woods said. “We have been able to identify a lot of future fraud and prevent fraud, with some clients seeing return on investment in months.”

However, Australian enterprises looking to use AI to counter criminal activity need to be aware that the decision-making capabilities of AI models are only as good as the data fed into them. Woods said organizations should be aiming to train their models on “perfect data.”

“First of all, many enterprises will not have a fraud file. Or in some cases they might have a few hundred entries on it, 20% of which are false positives,” Woods said. “But if you go ahead and deploy that model, it will mean mitigating action will be taken on more of your good customers.”
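To make that warning concrete, here is a rough simulation under assumed numbers: the same kind of classifier is trained twice, once on clean labels and once on a fraud file in which 20% of the entries are false positives, then both are scored against fresh, legitimate transactions. The dataset is synthetic and exact counts will vary, but the noisy model is the one that flags good customers.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic population: 5,000 good transactions and 200 confirmed frauds,
# separated imperfectly along two made-up feature axes.
good = rng.normal(loc=0.0, scale=1.0, size=(5000, 2))
fraud = rng.normal(loc=2.5, scale=1.0, size=(200, 2))
X = np.vstack([good, fraud])
y = np.array([0] * 5000 + [1] * 200)

# Corrupt the fraud file: mislabel 50 good transactions as fraud, so that
# 50 of the file's 250 "fraud" entries (20%) are false positives.
noisy = y.copy()
noisy[rng.choice(5000, size=50, replace=False)] = 1

# Fresh good customers the model has never seen.
good_holdout = rng.normal(loc=0.0, scale=1.0, size=(5000, 2))

for labels, name in [(y, "clean fraud file"), (noisy, "20% false-positive fraud file")]:
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X, labels)
    flagged = int(model.predict(good_holdout).sum())
    print(f"{name}: {flagged} of 5000 good customers flagged")
```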

Success will be as much about people as tools

IT leaders will need to remember that people are another key ingredient in success with AI models, in addition to copious amounts of clean, labelled data.

“You need humans. AI is not ready to be blindly trusted to make decisions on security,” Woods said. “You need people who are able to pore over the alerts, the decisions, to ensure AI is not making any false positives, which may have an impact on certain people.”
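One common way to put that advice into practice, sketched here with assumed thresholds rather than any F5 product behaviour, is to let only high-confidence model decisions act automatically and queue the grey zone for human analysts:

```python
# Human-in-the-loop triage: thresholds are illustrative assumptions.
AUTO_BLOCK_THRESHOLD = 0.95    # act without review above this score
HUMAN_REVIEW_THRESHOLD = 0.50  # send to an analyst above this score

def route_decision(fraud_score: float) -> str:
    if fraud_score >= AUTO_BLOCK_THRESHOLD:
        return "block"         # model is near-certain
    if fraud_score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"  # an analyst pores over the alert
    return "allow"             # treat as a good customer

for score in (0.99, 0.70, 0.10):
    print(score, "->", route_decision(score))
```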


Australia will continue to attract attention from threat actors

IT professionals could be in the middle of a growing AI war between hackers and enterprises. F5’s Jason Baden said that, due to Australia’s relative wealth, it will remain a heavily targeted jurisdiction.

“We will often see threats come through first into Australia because of the economic benefits of that,” Baden said. “This conversation is not going away; it will be front of mind in Australia.”

Cybersecurity education will be required to combat threats

This will mean continued education on cybersecurity is needed. Baden said this is because “if it is not generative AI today, it could be something else tomorrow.” Business stakeholders, including boards, need to know that, despite the money invested, they can never be 100% secure.

“It has to be education at all levels of an organization. We cannot assume customers are aware, but there are also experienced business people not exposed to cybersecurity,” Baden said. “They (boards) are investing the time to get to the bottom of it, and in some cases there’s a hope to fix it with money or buy a product and it will go away. But it is a long-term play.”

F5 supports the Federal Government’s actions to further build Australian cybersecurity resilience, including through its six announced Cyber Shields.

“Anything that is continuing to increase awareness of what the threats are is always going to be of benefit,” Baden said.

Less complexity could help win the war against bad actors

While there is no way to be 100% secure, simplicity could help organizations minimize risks.

“Enterprises often have contracts with dozens of different vendors,” Woods said. “What enterprises should be doing is reducing that level of complexity, because it breeds vulnerability. That’s what bad actors exploit every day: confusion due to complexity.”

With the cloud, for example, Woods said organizations didn’t set out to be multicloud, but the realities of business meant they became multicloud over time.

SEE: Australian and New Zealand enterprises are facing pressure to optimize cloud strategies.

“They need a layer of abstraction over all these clouds, with one policy that applies to all clouds, private and public,” Woods said. “There is now a huge trend towards consolidation and simplification to enhance security.”
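Woods is describing the abstraction layer in general terms rather than a specific product. A hypothetical sketch of the idea, with entirely illustrative cloud names and policy fields, could be as simple as one shared policy definition pushed to every environment:

```python
# Hypothetical sketch: one security policy applied uniformly across every
# cloud an organisation runs. The clouds, policy fields and apply_policy
# helper are illustrative assumptions, not a real product API.
POLICY = {
    "tls_minimum_version": "1.2",
    "waf_mode": "blocking",
    "mfa_required": True,
    "allowed_egress": ["443/tcp"],
}

CLOUDS = ["aws-prod", "azure-dr", "private-dc"]

def apply_policy(cloud: str, policy: dict) -> None:
    # A real abstraction layer would translate the shared policy into
    # each provider's native controls; here it just reports the intent.
    print(f"{cloud}: applying {len(policy)} controls -> {policy}")

for cloud in CLOUDS:
    apply_policy(cloud, POLICY)
```

The design benefit Woods points to is that security teams maintain one policy instead of dozens of per-vendor configurations, shrinking the complexity that attackers exploit.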


