
Biggest AI trends of 2024, according to top security experts


Generative artificial intelligence dominated headlines in 2023 and looks set to do the same in 2024. Predictions from thought leaders at CrowdStrike, Intel 471, LastPass and Zscaler forecast how the technology will be used, abused and leveraged in surprising ways in the year ahead.

The curated perspectives assembled here go beyond the potential and perils of AI in 2024 to cover how the technology will reshape workforces, expand attack surfaces and create new data insecurities as companies struggle to manage new large language model (LLM) data pools.

As the AI boom continues, the cybersecurity stakes rise into the new year, with a U.S. presidential election in November, a persistent skills gap for the sector to contend with and the rise of ransomware threats once again worrying infosec pros.

2024 will bring a serious cyberattack or data breach related to AI

Mike Lieberman, CTO and co-founder, Kusari:

The rush to capitalize on the productivity benefits of AI has led to teams cutting corners on security. We’re seeing an inverse correlation between an open source AI/ML project’s popularity and its security posture. On the other hand, AI will help organizations address cybersecurity by detecting and highlighting common bad security patterns in code and configuration. Over the next few years, we will see AI improve to the point of providing guidance in more complex scenarios. However, AI/ML must be a signal – not a decision maker.

AI is a wild card

Michael DeBolt, chief intelligence officer, Intel 471:


While there doesn’t appear to be a killer AI application for cybercriminals thus far, its power could be helpful for some of the mundane backend work that cybercriminals perform.

Cybercrime-as-a-service, the term for the collective goods and services that threat actors supply to each other, is marked by an emphasis on specialization, scale and efficiency. For example, threat actors could use LLMs to sort through masses of stolen data and figure out the most important data to mention when extorting a company, or employ a chatbot to engage in preliminary ransom negotiations.

Another hypothetical innovation could be an AI tool that calculates the maximum ransom an organization will pay based on the data that is stolen. We reported a few examples of actors implementing AI in their offers during the second quarter of 2023, which included an initial access broker (IAB) offering free translation services using AI. In May 2023, we reported a threat actor offering a tool that allegedly could bypass ChatGPT’s restrictions.

AI and ML tools are capable of enabling impersonation via video and audio, which pose threats to identity and access management. Videos rendered using AI are fairly detectable now, but synthesized voice cloning is very much a threat to organizations that use voice biometrics as part of authentication flows. We still assess that AI cannot be fully relied upon for more intricate cybercrime, and doing so in its current form will likely render flawed results. But this area is moving so swiftly it’s difficult to see what is on the horizon.

The proliferation of open-source LLMs and services — some of which are being built with the intention of not having safety guardrails to prevent malicious use — means this area remains very much a wild card.

AI blind spots open the door to new corporate risks

Elia Zaitsev, CTO, CrowdStrike:


In 2024, CrowdStrike expects that threat actors will shift their attention to AI systems as the newest threat vector to target organizations, through vulnerabilities in sanctioned AI deployments and blind spots from employees’ unsanctioned use of AI tools.

After a year of explosive growth in AI use cases and adoption, security teams are still in the early stages of understanding the threat models around their AI deployments and tracking unsanctioned AI tools that have been introduced to their environments by employees. These blind spots and new technologies open the door to threat actors eager to infiltrate corporate networks or access sensitive data.

Critically, as employees use AI tools without oversight from their security team, companies will be forced to grapple with new data protection risks. Corporate data entered into AI tools isn’t just at risk from threat actors targeting vulnerabilities in those tools to extract it; the data is also at risk of being leaked or shared with unauthorized parties as part of the system’s training protocol.

2024 will be the year when organizations will need to look internally to understand where AI has already been introduced into their organizations (through official and unofficial channels), assess their risk posture, and be strategic in creating guidelines to ensure secure and auditable usage that minimizes company risk and spend but maximizes value. 

GenAI will level-up the role of security analysts

Chris Meenan, vice president product management, IBM Security:


Companies have been using AI/ML to improve the efficacy of security technologies for years, but the introduction of generative AI will be aimed squarely at maximizing the human element of security. In this coming year, GenAI will begin to take on certain tedious, administrative tasks on behalf of security teams — but beyond this, it will also enable less experienced team members to take on more challenging, higher level tasks. For example, we’ll see GenAI being used to translate technical content, such as machine-generated log data or analysis output, into simplified language that is more understandable and actionable for novice users. By embedding this type of GenAI into existing workflows, it will not only free up security analysts’ time in their current roles, but enable them to take on more challenging work — alleviating some of the pressure created by current security workforce and skills challenges.
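
To make the idea concrete, here is a minimal sketch of the kind of translation Meenan describes, using a general-purpose LLM API to rephrase a raw log event for a junior analyst. The model name, prompt wording and log line are illustrative assumptions, not IBM Security's implementation.

```python
# Sketch: ask a general-purpose LLM to explain a raw log event in plain
# language. Model choice and prompt are placeholders, not a product feature.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

raw_event = (
    "2024-01-11T03:12:45Z sshd[2214]: Failed password for invalid user admin "
    "from 203.0.113.7 port 52114 ssh2 (12 attempts in 60s)"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[
        {"role": "system",
         "content": "Explain security log events in plain language for a "
                    "junior analyst and suggest one next step."},
        {"role": "user", "content": raw_event},
    ],
)

print(response.choices[0].message.content)
```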

Increase in sophisticated, personalized phishing and malware attacks

Ihab Shraim, CTO, CSC Digital Brand Services:

Phishing and malware continue to be the most used cyber threat vectors for launching fraud and data theft attacks, especially when major events occur and frenzied reactions abound. In 2024, with the rise of generative AI tools such as FraudGPT, cybercriminals will have a huge advantage in launching phishing campaigns that combine speed with sophistication. ChatGPT will allow bad actors to craft phishing emails that are personalized, targeted, and free of spelling and grammatical errors, which will make such emails harder to detect. Moreover, dark web AI tools will be readily available, allowing for more complex, socially engineered deepfake attacks that manipulate the emotions and trust of targets at even faster rates.


Beyond phishing, the rise of LLMs will make the endpoint a prime target for cybercriminals in 2024

Dr. Ian Pratt, global head of security for personal systems at HP Inc.:

One of the big trends we expect to see in 2024 is a surge in the use of generative AI to make phishing lures much harder to detect, leading to more endpoint compromise. Attackers will be able to automate the drafting of emails in minority languages, scrape information from public sites such as LinkedIn to pull information on targets, and create highly personalized social engineering attacks en masse. Once threat actors have access to an email account, they will be able to automatically scan threads for important contacts, conversations and even attachments, sending back updated versions of documents with malware implanted, making it almost impossible for users to identify malicious actors. Personalizing attacks used to require humans, so the capability to automate such tactics is a real challenge for security teams. Beyond this, we expect continued use of ML-driven fuzzing, where threat actors probe systems to discover new vulnerabilities. We may also see ML-driven exploit creation emerge, which could reduce the cost of creating zero-day exploits, leading to their greater use in the wild.

Simultaneously, we will see a rise in “AI PCs,” which will revolutionize how people interact with their endpoint devices. With advanced compute power, AI PCs will enable the use of “local Large Language Models (LLMs)” — smaller LLMs running on-device, enabling users to leverage AI capabilities independently from the internet. These local LLMs are designed to better understand the individual user’s world, acting as personalized assistants. But as devices gather vast amounts of sensitive user data, endpoints will be a higher risk target for threat actors.  
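
For illustration, a "local LLM" of the kind Pratt describes can be sketched with an on-device inference library: the prompt and the user's data never leave the machine. The library choice (llama-cpp-python) and the model file path below are assumptions, not a description of any specific AI PC product.

```python
# Sketch: on-device inference with a small local model. No network calls;
# prompts and personal data stay on the endpoint. Model path is hypothetical.
from llama_cpp import Llama

llm = Llama(model_path="./models/small-assistant.gguf", n_ctx=2048)

out = llm(
    "Summarize my meeting notes from today in three bullet points:\n"
    "- Discussed Q3 roadmap\n- Budget review moved to Friday\n- New hire starts Monday",
    max_tokens=128,
)
print(out["choices"][0]["text"])
```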

As many organizations rush to use LLMs to power their chatbots and boost convenience, they open themselves up to users abusing those chatbots to access data they previously wouldn’t have been able to reach. Threat actors will be able to socially engineer corporate LLMs with targeted prompts that trick them into overriding their controls and giving up sensitive information — leading to data breaches.
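
One hedged defensive sketch for this risk: enforce the requesting user's entitlements on retrieved documents before anything reaches the model, so a crafted prompt cannot talk the chatbot into data the user was never allowed to see. The names and the entitlement store below are hypothetical.

```python
# Sketch: filter retrieved documents against user entitlements *before*
# they are placed in the LLM prompt, regardless of what the prompt asks for.
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    text: str
    allowed_groups: set[str]

# Hypothetical entitlement store
ENTITLEMENTS = {"alice": {"engineering"}, "bob": {"engineering", "finance"}}

def retrieve_for_user(user: str, candidates: list[Document]) -> list[Document]:
    """Drop any document the user is not entitled to see."""
    groups = ENTITLEMENTS.get(user, set())
    return [d for d in candidates if d.allowed_groups & groups]

docs = [
    Document("d1", "Public engineering wiki page", {"engineering"}),
    Document("d2", "Payroll spreadsheet summary", {"finance"}),
]

# Only the permitted context would be handed to the chatbot's prompt.
context = retrieve_for_user("alice", docs)
print([d.doc_id for d in context])  # ['d1']
```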

The menace – and promise – of AI

Alex Cox, director of LastPass’ threat intelligence, mitigation and escalation team:

AI is trending everywhere. We’ll continue to see that in 2024 and beyond. The capabilities unlocked by AI, from GenAI to deepfakes, will completely shift the threat environment.

Take phishing as an example. An obvious “tell” of a phishing email is imperfect grammar. Anti-phishing technologies are built with these errors in mind and are trained to find them. However, generative AI like ChatGPT has significantly fewer shortcomings when it comes to language — and, as malicious actors take advantage of these tools, we’re already beginning to see the sophistication of these attacks increase.

AI’s big data capabilities will also be a boon to bad actors. Attackers are excellent at stealing troves of information — emails, passwords, etc. — but traditionally they’ve had to sift through it to find the treasure. With AI, they’ll be able to pull a needle from a haystack instantaneously, identifying and weaponizing valuable information faster than ever.

Thankfully, security vendors are already building AI into their tools. AI will help the good guys sift through big data, detect phishing attempts, provide real-time security suggestions during software development and more.
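
As a toy illustration of that defensive side, the sketch below trains a text classifier on phishing signals that go beyond spelling mistakes (urgency, payment requests, credential asks). The four training examples are invented; a production system would use large labeled corpora and many more features.

```python
# Sketch: a tiny phishing classifier. Training data is made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your invoice is attached, please review at your convenience.",
    "URGENT: wire transfer needed today, reply with account details.",
    "Team lunch is moved to Thursday at noon.",
    "Your mailbox is full, verify your password here to avoid suspension.",
]
labels = [0, 1, 0, 1]  # 1 = phishing

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

print(model.predict(["Please confirm your login credentials immediately"]))
```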

Security professionals should help their leadership understand the AI landscape and its potential impacts on their organization. They should update employee education, like anti-phishing training, and communicate with vendors about how they are securing against these new capabilities.

There will be a transition to AI-generated tailored malware and full-scale automation of cyberattacks

Adi Dubin, vice president of product management, Skybox Security:

Cybersecurity teams face a significant threat from the rapid automation of malware creation and execution using generative AI and other advanced tools. In 2023, AI systems capable of generating highly customized malware emerged, giving threat actors a new and powerful weapon. In the coming year, the focus will shift from merely generating tailored malware to automating the entire attack process. This will make it much easier for even unskilled threat actors to launch successful attacks. 

Securing AI tools to challenge teams

Dr. Chaz Lever, senior director, security research, Devo:

It’s been a year since ChatGPT hit the scene, and since its debut, we’ve seen a massive proliferation in AI tools. To say it’s shaken up how organizations approach work would be an understatement. However, as organizations rush to adopt AI, many lack a fundamental understanding of how to implement the right security controls for it.

In 2024, security teams’ biggest challenge will be properly securing the AI tools and technologies their organizations have already onboarded. We’ve already seen attacks against GenAI models such as model inversion, data poisoning, and prompt injection; and as the industry adopts more AI tools, the AI attack surface across these novel applications will expand. This will pose a couple of challenges: refining the ways AI is used to improve efficiency and threat detection while grappling with the new vulnerabilities these tools introduce. Add in the fact that bad actors are also using these tools to help automate the development and execution of new threats, and you’ve created an environment ripe for new security incidents.
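
As one hedged example of a poisoning check implied by that concern, the sketch below flags training samples whose feature vectors sit unusually far from their class centroid before they reach a model. The threshold and toy data are assumptions, not a complete defense.

```python
# Sketch: flag per-class outliers in a training set as possible poisoned samples.
import numpy as np

def flag_outliers(X: np.ndarray, y: np.ndarray, z_thresh: float = 3.0) -> np.ndarray:
    """Return indices of samples far from their class centroid."""
    flagged = []
    for label in np.unique(y):
        idx = np.where(y == label)[0]
        centroid = X[idx].mean(axis=0)
        dists = np.linalg.norm(X[idx] - centroid, axis=1)
        z = (dists - dists.mean()) / (dists.std() + 1e-9)
        flagged.extend(idx[z > z_thresh].tolist())
    return np.array(flagged, dtype=int)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 5)), [[25, 25, 25, 25, 25]]])  # one planted outlier
y = np.zeros(101)
print(flag_outliers(X, y))  # flags index 100
```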

Just like any new technology, companies will need to balance security, convenience, and innovation as they adopt AI and ensure they understand the potential repercussions of it.

From threat prevention to prediction, cybersecurity nears a historic milestone

Sridhar Muppidi, CTO, IBM Security:

As AI crosses a new threshold, security predictions at scale are becoming more tangible. Although early security use cases of generative AI focus on the front end, improving security analysts’ productivity, I don’t think we’re far from seeing generative AI deliver a transformative impact on the back end to completely reimagine threat detection and response into threat prediction and protection. The technology is there, and the innovations have matured. The cybersecurity industry will soon reach a historic milestone, achieving prediction at scale.


The democratization of AI tools will lead to a rise in more advanced attacks against firmware and even hardware

Boris Balacheff, chief technologist for system security research and innovation at HP Inc.:

In 2024, powerful AI will be in the hands of the many, making sophisticated capabilities more accessible at scale to malicious actors. This will accelerate attacks not only on OS and application software, but also across more complex layers of the technology stack such as firmware and hardware. Previously, would-be threat actors needed to develop or hire very specialist skills to create such exploits and code, but the growing use of generative AI has started to remove many of these barriers. This democratization of advanced cyber techniques will lead to a proliferation of more advanced, stealthier and more destructive attacks. We should expect more cyber events like MoonBounce and CosmicStrand as attackers are able to find or exploit vulnerabilities to get a foothold below a device’s operating system. Recent security research even shows how AI will enable malicious exploit generation to create trojans all the way into hardware designs, promising increased pressure on the hardware supply chain.

The political climate will continue to be in uncharted waters with disinformation, deep fakes and the advancement of AI

Ed Williams, regional VP of pen testing, EMEA at Trustwave:

Databases have been leaked in past U.S. elections, and one can only assume that attempted, and possibly successful, cyberattacks will happen again.

AI has the ability to spread disinformation via deep fakes, and in 2024 this will only continue to explode. Deep fakes and other misinformation are already prevalent today, and many people do not check for authenticity; what they see on their phones and social media becomes their idea of the truth, which only amplifies the impact. There is discourse on both sides of the aisle, particularly ahead of party elections. This alone will create an environment susceptible to the spread of misinformation and encourage nation-states to interfere where they can.

Enhanced phishing tools will improve social engineering success rates

Nick Stallone, senior director, governance, risk and compliance leader, MorganFranklin Consulting:

2024 will see broader adoption of automated and advanced spear phishing/vishing tools. These tools, combined with enhanced and more accessible deep fake and voice cloning technology, will vastly improve social engineering success rates. This will lead to increased fraud and compromised credentials perpetrated through these methods. All industries must be aware of these improved methods and focus on incorporating updated controls, awareness, and training to protect against them as soon as possible.

On the other side, cybersecurity tools that incorporated machine learning and artificial intelligence over the past few years will also become more efficient in protecting organizations from these threats. The training models associated with these tools will have access to more data from increased adoption, leading to shorter implementation periods and exponential market growth in 2024. These tools will bring the greatest level of efficiencies and reduced costs for cybersecurity monitoring and assessments, allowing security teams to be more focused on their organization’s greatest risks.

Democratization of AI will be a double-edged sword for cybersecurity

Atticus Tysen, SVP and chief information security officer, Intuit:

While the democratization of AI shows great promise, its widespread availability poses an unprecedented challenge for cybersecurity. AI will evolve specific attacks against enterprises into continuous, ubiquitous threats against businesses, individuals, and the infrastructure they rely upon. Even so, it will be a race against the threat actors to design resilient systems and protections. If we fail, the risk of successful hacks becoming commonplace and wreaking havoc in the near future is a clear and present danger.

In 2024, English will become the best programming language for evil

Fleming Shi, CTO, Barracuda:


It was no surprise that coming into 2023, generative AI would be integrated into security stacks and solutions. However, the big surprise was how quickly generative AI has taken over every aspect of the technology space. This is concerning as we enter 2024 because just as security professionals are using the new technology to add to their defenses, bad actors are doing the same. LLMs are extremely capable of writing code, but often come with guardrails that prevent them from writing malicious code. However, generative AI can be “fooled” into helping threat actors anyway – particularly when it comes to social engineering techniques. Rather than telling the tool to create an email phishing template, one only has to ask it to write a letter from a CEO asking for payment of an invoice. These slight changes in phrasing make generally available tools vulnerable to abuse and extremely useful to bad actors everywhere. Because this process is so easy, 2024 will be the year that English becomes the best programming language for evil.

AI attacks to give new meaning to ‘garbage in, garbage out’

Dave Shackleford, IANS Research faculty, founder and principal consultant:

In 2024, we will definitely see emerging attacks against machine learning and AI models and infrastructure. With more and more organizations relying on cloud-based AI processing and data models, it’s likely that attackers will begin targeting these environments and data sets with disruptive attacks as well as data pollution strategies. Today, we have almost nothing in terms of defined attack paths and strategies for this within frameworks like MITRE ATT&CK, but that will likely change in the next year. These attacks will give new meaning to the classic maxim of “garbage in, garbage out” and we will need to learn to identify and defend against them.

Language models pose dual threats to software security

Andrew Whaley, senior technical director, Promon:

Large language models (LLMs) have come a remarkably long way over the past year, and with them so have bad actors’ reverse engineering capabilities. This poses two main threats: first, reverse engineering is now far easier, giving fledgling hackers capabilities typically associated with specialists; second, traditional protection methods are less effective against automated deobfuscation attacks. This increases software’s vulnerability to malicious exploitation and will lead to an expected rise in incidents, including high-value attacks. Examples include mass attacks against mobile banking apps, remote mobile OS takeover, and malware targeting smart devices.


Deep faked CEOs

Navroop Mitter, CEO of ArmorText:

Dramatic improvements in the quality of generated voice and video of real-life persons coupled with further improvements in GenAI and LLMs to automatically assess and replicate the nuances of how individuals communicate, both orally and in written form, will enable novel attacks for which most organizations are severely underprepared.

Over the next 24 months, organizations will face attackers mimicking their executives not just through email spoofing but through near-perfect AI-driven mimicry of their voice, likeness, and diction. This will present multiple challenges, especially during incident response: how will companies distinguish between the Real McCoy and a near-perfect imposter amid the chaos of a crisis?

Existing policies and procedures designed around handling rogue executives won’t apply because the Real McCoy is still present and very much needed in these conversations.
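
One hedged approach to the verification gap Mitter describes is to pair out-of-band channels with cryptographic signing, so a directive "from the CEO" during an incident can be checked against a key enrolled in advance. The sketch below uses the Python cryptography package; the key handling and message format are illustrative assumptions, not a prescribed procedure.

```python
# Sketch: verify an executive's directive with a pre-enrolled signing key,
# regardless of how convincing the voice or video on the call sounds.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Enrollment (done once, in person or via a trusted ceremony)
ceo_private_key = Ed25519PrivateKey.generate()
ceo_public_key = ceo_private_key.public_key()

# During an incident: the executive signs the directive...
directive = b"Authorize isolation of the finance VLAN at 14:00 UTC"
signature = ceo_private_key.sign(directive)

# ...and responders verify it before acting.
try:
    ceo_public_key.verify(signature, directive)
    print("Directive verified: proceed")
except InvalidSignature:
    print("Verification failed: treat as potential impersonation")
```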

Businesses will need to learn to hide their attack surface at a data level

Sam Curry, VP and CISO at Zscaler:

The influx of generative AI tools such as ChatGPT has forced businesses to realize that if their data is available on the cloud or internet, then it can be used by generative AI, and therefore by competitors. If organizations want to prevent their IP from being used by gen AI tools, they will need to ensure their attack surface is hidden at the data level rather than just at the application level.

Based on the rapid adoption of gen AI tools we predict businesses will accelerate their efforts to classify all their data into risk categories and implement proper security measures to prevent leakage of IP.
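
A minimal sketch of that classification step might look like the following: tag outbound text by risk category before it can be pasted into an external gen AI tool. The patterns and categories are illustrative assumptions; real data-loss-prevention engines use far richer detectors and policy.

```python
# Sketch: classify outbound text into risk categories before it leaves the org.
import re

RISK_PATTERNS = {
    "high":   [r"\b\d{3}-\d{2}-\d{4}\b",                              # SSN-like
               r"(?i)\bconfidential\b|\btrade secret\b"],
    "medium": [r"(?i)\binternal use only\b",
               r"\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b"],  # email address
}

def classify(text: str) -> str:
    for level in ("high", "medium"):
        if any(re.search(p, text) for p in RISK_PATTERNS[level]):
            return level
    return "low"

snippet = "Draft press release - CONFIDENTIAL - contact jane.doe@example.com"
level = classify(snippet)
print(level)  # 'high'
if level != "low":
    print("Block or redact before sending to an external AI tool")
```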

API security evolves as AI enhances offense-defense strategies

Shay Levi, CTO and co-founder, Noname Security:

In 2023, AI began transforming cybersecurity, playing pivotal roles on both the offensive and defensive fronts. Traditionally, identifying and exploiting complex, one-off API vulnerabilities required human intervention. AI is now changing this landscape, automating the process and enabling cost-effective, large-scale attacks. In 2024, I predict a notable increase in the sophistication and scalability of attacks. We will witness a pivotal shift as AI becomes a powerful tool for both malicious actors and defenders, redefining the dynamics of digital security.

The emergence of ‘poly-crisis’ due to pervasive AI-based cyberattacks

Agnidipta Sarkar, VP CISO Advisory, ColorTokens:

We saw the emergence of AI in 2022, and with it the misuse of AI as an attack vector, making phishing attempts sharper and more effective. In 2024, I expect cyberattacks to become pervasive as enterprises transform. It is already possible to entice AI enthusiasts into falling prey to AI prompt injection. Come 2024, perpetrators will find it easier to use AI to attack not only traditional IT but also cloud containers and, increasingly, ICS and OT environments, leading to the emergence of a “poly-crisis” that threatens not only financial impact but also human life, with simultaneous, cascading effects. Critical computing infrastructure will be under increased threat due to rising geopolitical tensions. Cyber defense will be automated, leveraging AI to adapt to newer attack models.

AI developments for threat actors will lead to nearly real-time detection methods

Mike Spanbauer, Field CTO, Juniper Networks:

AI will continue to prove dangerous in the hands of threat actors, accelerating their ability to write and deliver effective threats. Organizations must adapt how they approach defense measures and leverage new, proven methods to detect and block threats. We will see the rise of nearly real-time measures that can identify a potentially malicious file or variant of a known threat at line rate.

AI won’t be used for full-fledged attacks, but social engineering attacks will proliferate

Etay Maor, senior director of security strategy, Cato Networks:

No, there won’t be a wave of AI-based attacks — while AI has been getting a lot of attention ever since the introduction of ChatGPT, we are not even close to seeing a full-fledged AI-based attack. You don’t even have to take my word for it — the threat actors on major cybercrime forums are saying it as well. Hallucinations, model restrictions, and the current maturity level of LLMs are just some of the reasons this issue is actually a non-issue at this point in time.

But we should expect to see LLMs being used to expedite and perfect small portions or tasks of attacks, be it email creation, help with social engineering by creating profiles or documents, and more. AI is not going to replace people, but people who know how to use AI will replace those who don’t.

The balance of attackers and defenders continuously tested

Kayla Williams, CISO, Devo:

This one may be a no-brainer, but it must be said again and again. Bad actors will use AI/ML and other advanced technologies to create sophisticated attack tactics and techniques. They’ll use these tools to pull off more and faster attacks, putting increased pressure on security teams and defense systems. The pace of progress is equally fast on both sides — defenders and attackers — and that balance will continually be tested in the coming year.

AI accelerates social engineering attacks

Kevin O’Connor, head of threat research at Adlumin:

Commercially available and open-source AI capabilities — including large language models (LLMs) like ChatGPT and LLaMA, and countless variants — will help attackers design well-thought-out and effective social engineering campaigns. With AI systems increasingly integrating with troves of personal information through social media sites from LinkedIn to Reddit, we’ll see even low-level attackers gain the ability to create targeted and convincing social engineering-based campaigns.


