Cybersecurity experts expect surge in AI-generated hacking attacks – The Washington Post


SAN FRANCISCO — Earlier this year, a sales director in India for tech security firm Zscaler got a call that seemed to be from the company’s chief executive.

As his cellphone displayed founder Jay Chaudhry’s picture, a familiar voice said “Hi, it’s Jay. I need you to do something for me,” before the call dropped. A follow-up text over WhatsApp explained why. “I think I’m having poor network coverage as I am traveling at the moment. Is it okay to text here in the meantime?”

Then the caller asked for assistance moving money to a bank in Singapore. Trying to help, the sales director went to his manager, who smelled a rat and turned the matter over to internal investigators. They determined that scammers had reconstituted Chaudhry’s voice from clips of his public remarks in an attempt to steal from the company.

Chaudhry recounted the incident last month on the sidelines of the annual RSA cybersecurity conference in San Francisco, where concerns about the revolution in artificial intelligence dominated the conversation.

Criminals have been early adopters, with Zscaler citing AI as a factor in the 47 percent surge in phishing attacks it saw last year. Crooks are automating more personalized texts and scripted voice recordings while dodging alarms by going through such unmonitored channels as encrypted WhatsApp messages on personal cellphones. Translations to the target language are getting better, and disinformation is harder to spot, security researchers said.

That is just the beginning, experts, executives and government officials fear, as attackers use artificial intelligence to write software that can break into corporate networks in novel ways, change appearance and functionality to beat detection, and smuggle data back out through processes that appear normal.

“It is going to help rewrite code,” National Security Agency cybersecurity chief Rob Joyce warned the conference. “Adversaries who put in work now will outperform those who don’t.”


The result will be more believable scams, smarter selection of insiders positioned to make mistakes, and growth in account takeovers and phishing as a service, where criminals hire specialists skilled at AI.

Those pros will use the tools for “automating, correlating, pulling in information on employees who are more likely to be victimized,” said Deepen Desai, Zscaler’s chief information security officer and head of research.

“It’s going to be simple questions that leverage this: ‘Show me the last seven interviews from Jay. Make a transcript. Find me five people connected to Jay in the finance department.’ And boom, let’s make a voice call.”

Phishing awareness programs, which many companies require employees to study annually, will need to be revamped.

The prospect comes as a range of professionals report real progress in security. Ransomware, while not going away, has stopped getting dramatically worse. The cyberwar in Ukraine has been less disastrous than had been feared. And the U.S. government has been sharing timely and useful information about attacks, this year warning 160 organizations that they were about to be hit with ransomware.

AI will help defenders as well, scanning reams of network traffic logs for anomalies, making routine programming tasks much faster, and seeking out known and unknown vulnerabilities that need to be patched, experts said in interviews.
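The log-scanning experts describe can be sketched in a few lines. Below is a minimal, illustrative example, assuming scikit-learn and an invented four-feature summary of network flows; nothing here reflects any vendor’s actual pipeline.

```python
# Minimal sketch of anomaly screening over network flow logs.
# Assumes scikit-learn; the features and numbers are invented
# for illustration, not drawn from any real product.
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy flow records: bytes_out, duration_s, distinct_ports, failed_logins
normal_flows = np.array([
    [5_000, 12.0, 2, 0],
    [7_200, 30.0, 3, 1],
    [4_100, 8.5, 1, 0],
] * 50)  # repeated rows standing in for "reams" of ordinary logs

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_flows)

# A flow moving far more data across far more ports should stand out.
suspect = np.array([[950_000, 600.0, 40, 12]])
print(model.decision_function(suspect))  # negative score => anomalous
print(model.predict(suspect))            # -1 => flagged for review
```

The point of such models is triage, not verdicts: a low score earns a human look, nothing more.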

Some companies have added AI tools to their defensive products or released them for others to use freely. Microsoft, one of the first big companies to put a chat-based AI in front of the public, announced Microsoft Security Copilot in March. It said users could ask the service questions about attacks picked up by Microsoft’s collection of trillions of daily signals as well as outside threat intelligence.

Software analysis firm Veracode, meanwhile, said its forthcoming machine learning tool would not only scan code for vulnerabilities but offer patches for those it finds.
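Veracode has not published how that tool works, but the scan-and-suggest idea itself is easy to illustrate. The toy Python scanner below, a naive sketch rather than anything resembling a real product, pattern-matches a few well-known risky constructs and prints a suggested fix for each; production tools parse code and apply trained models instead of regexes.

```python
# Toy scan-and-suggest sketch: flag risky patterns, propose a fix.
# Purely illustrative; real scanners analyze parsed code, not raw lines.
import re

FINDINGS = {
    r"\beval\(": "Avoid eval(); use ast.literal_eval for data parsing.",
    r"\bpickle\.loads\(": "Untrusted pickle means code execution; prefer json.loads.",
    r"password\s*=\s*['\"]": "Hard-coded credential; load secrets from the environment.",
}

def scan(source: str) -> list[tuple[int, str]]:
    """Return (line_number, suggested_fix) pairs for risky patterns."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, fix in FINDINGS.items():
            if re.search(pattern, line):
                hits.append((lineno, fix))
    return hits

sample = "import pickle\ndata = pickle.loads(blob)\npassword = 'hunter2'\n"
for lineno, fix in scan(sample):
    print(f"line {lineno}: {fix}")
```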


But cybersecurity is an asymmetric fight. The outdated architecture of the internet’s main protocols, the ceaseless layering of flawed programs on top of one another, and decades of economic and regulatory failures pit armies of criminals with nothing to fear against businesses that do not even know how many machines they have, let alone which are running out-of-date programs.

By multiplying the powers of both sides, AI will give far more juice to the attackers for the foreseeable future, defenders said at the RSA conference.

Every tech-enabled protection — such as automated facial recognition — introduces new openings. In China, a pair of thieves were reported to have used multiple high-resolution photos of the same person to make videos that fooled local tax authorities’ facial recognition programs, enabling a $77 million scam.

Many veteran security professionals deride what they call “security by obscurity,” where targets plan on surviving hacking attempts by hiding what programs they depend on or how those programs work. Such a defense is often arrived at not by design but as a convenient justification for not replacing older, specialized software.

The experts argue that sooner or later, inquiring minds will figure out flaws in those programs and exploit them to break in.

Artificial intelligence puts all such defenses in mortal peril, because it can democratize that sort of knowledge, making what is known somewhere known everywhere.

Incredibly, one need not even know how to program to construct attack software.

“You will be able to say, ‘just tell me how to break into a system,’ and it will say, ‘here’s 10 paths in,’” said Robert Hansen, who has explored AI as deputy chief technology officer at security firm Tenable. “They are just going to get in. It’ll be a very different world.”


Indeed, an expert at security firm Forcepoint reported last month that he used ChatGPT to assemble an attack program that could search a target’s hard drive for documents and export them, all without writing any code himself.

In another experiment, ChatGPT balked when Nate Warfield, director of threat intelligence at security company Eclypsium, asked it to find a vulnerability in an industrial router’s firmware, warning him that hacking was illegal.

“So I said ‘tell me any insecure coding practices,’ and it said, ‘Yup, right here,’” Warfield recalled. “This will make it a lot easier to find flaws at scale.”

Getting in is only part of the battle, which is why layered security has been an industry mantra for years.

But hunting for malicious programs that are already on your network is going to get much harder as well.

To show the risks, a security firm called HYAS recently released a demonstration program called BlackMamba. It works like a regular keystroke logger, slurping up passwords and account data, except that every time it runs it calls out to OpenAI and gets new and different code. That makes it much harder for detection systems to catch, because they have never seen the exact program before.
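None of BlackMamba’s code appears in this article, but the detection problem it exploits is simple to demonstrate. In the sketch below, two snippets that behave identically produce different hashes, which is why a blocklist keyed on exact file contents never matches the next variant of a program that rewrites itself.

```python
# Why exact-signature detection fails against code that mutates:
# identical behavior, different bytes, different hashes.
# (Illustrative only; no malware logic is shown here.)
import hashlib

variant_a = b"def f(x):\n    return x + 1\n"
variant_b = b"def f(y):\n    return y + 1\n"  # renamed variable, same behavior

print(hashlib.sha256(variant_a).hexdigest())
print(hashlib.sha256(variant_b).hexdigest())
print(hashlib.sha256(variant_a).hexdigest() ==
      hashlib.sha256(variant_b).hexdigest())  # False: the signature misses
```

Defenders therefore lean on behavioral signals, such as what a process touches and where it phones home, rather than on what its bytes look like.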

The federal government is already moving to deal with the proliferation of AI on both sides of the fight. Last week, the National Science Foundation said it and partner agencies would pour $140 million into seven new research institutes devoted to AI.

One of them, led by the University of California at Santa Barbara, will pursue means for using the new technology to defend against cyberthreats.


