
Are we doomed to make the same security mistakes with AI? – Security Intelligence



If you ask Jen Easterly, director of CISA, the current cybersecurity woes are largely the result of misaligned incentives. This occurred as the technology industry prioritized speed to market over security, said Easterly at a recent Hack the Capitol event in McLean, Virginia.

“We don’t have a cyber problem, we have a technology and culture problem,” Easterly said. “Because at the end of the day, we have allowed speed to market and features to really put safety and security in the backseat.” And today, no place in technology demonstrates the obsession with speed to market more than generative AI.

Upon the release of ChatGPT, OpenAI ignited a race to incorporate AI technology into every facet of the enterprise toolchain. Have we learned anything from the current onslaught of cyberattacks? Or will the desire to get to market first continue to drive companies to throw caution to the wind?

Forgotten lessons?

Here’s a chart showing how the number of cyberattacks has exploded over the last several years. Mind you, those figures count attacks per corporation per week. No wonder security teams feel overworked.

Source: Check Point

Cyber insurance premiums have also risen steeply, a sign that many claims are being paid out. Some insurers won’t even provide coverage for companies that can’t prove they have adequate security.

Even though everyone is aware of the threat, successful attacks keep occurring. Companies may have security on their minds, but gaping holes remain that must be closed.

The Log4j debacle is a prime example. In 2021, the infamous Log4Shell bug was found in the widely used open-source logging library Log4j. This exposed a massive swath of applications and services, from popular consumer and enterprise platforms to critical infrastructure and IoT devices. Log4j vulnerabilities impacted over 35,000 Java packages.


Part of the problem was that security wasn’t fully built into Log4j. But the problem isn’t software vulnerability alone; it’s also the lack of awareness. Many security and IT professionals have no idea whether Log4j is part of their software supply chain, and you can’t patch something you don’t even know exists. Even worse, some may choose to ignore the danger. And that’s why threat actors continue to exploit Log4j, even though it’s easy to fix.
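You can’t fix what you can’t find, and even a rough inventory helps. Below is a minimal, hypothetical sketch in Python of the kind of check a team might run: it walks a directory tree and flags JAR files that still bundle the JndiLookup class exploited by Log4Shell. The path handling and approach are illustrative assumptions; it won’t catch Log4j copies shaded inside other JARs, which is why a proper software bill of materials or a dedicated scanner is the real answer.

```python
#!/usr/bin/env python3
"""Minimal sketch: scan a directory tree for JARs that bundle the Log4j
JndiLookup class (the component behind Log4Shell, CVE-2021-44228).
Illustrative only; real inventories should rely on an SBOM or a scanner."""

import sys
import zipfile
from pathlib import Path

# Class removed in patched Log4j releases; its presence is a red flag.
SUSPECT_CLASS = "org/apache/logging/log4j/core/lookup/JndiLookup.class"

def scan(root: str) -> None:
    for jar in Path(root).rglob("*.jar"):
        try:
            with zipfile.ZipFile(jar) as zf:
                if SUSPECT_CLASS in zf.namelist():
                    print(f"POTENTIALLY VULNERABLE: {jar}")
        except zipfile.BadZipFile:
            print(f"skipped (not a valid JAR): {jar}")

if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else ".")
```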

Will the tech industry continue down the same dangerous path with AI applications? Will we fail to build in security, or worse, simply ignore it? What might be the consequences?

The new AI threat

These days, artificial intelligence has captured the world’s imagination. In the security industry, there’s already evidence that criminals are using AI to write malicious code or generate advanced phishing campaigns. But AI introduces another type of danger as well.

At a recent AI for Good webinar, Arndt Von Twickel, technical officer at Germany’s Federal Office for Information Security (BSI), said that to deal with AI-based vulnerabilities, engineers and developers need to evaluate existing security methods, develop new tools and strategies and formulate technical guidelines and standards.

Hacking AI systems

Take “connectionist AI” systems, for example. These technologies enable safety-critical applications like autonomous driving. And the systems have reached far better-than-human performance levels.

However, AI systems can make life-threatening mistakes if given bad input. High-quality data, and the training that huge neural networks require, are expensive, so companies often buy existing datasets and pre-trained models from third parties. Sound familiar? Third-party risk is already one of the most important sources of data breaches.


As per AI for Good, “Malicious training data, introduced through a backdoor attack, can cause AI systems to generate incorrect outputs. In an autonomous driving system, a malicious dataset could incorrectly tag stop signs or speed limits.” Even small amounts of poisoned data could lead to disastrous results, lab experiments show.
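To make the idea concrete, here is a minimal sketch of a label-flipping poisoning attack on a toy classifier. The digits dataset and logistic regression model are stand-ins rather than anything from an autonomous-driving stack: class 8 plays the role of the stop sign and class 3 the speed limit it gets mislabeled as.

```python
"""Minimal sketch of a label-flipping (data-poisoning) attack on a toy
scikit-learn classifier. Dataset, model and flip fraction are stand-ins;
the point is that corrupted labels silently skew what the model learns."""

import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X = X / 16.0   # scale pixel values to [0, 1] so the solver converges cleanly
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_on_class(model, cls):
    """Accuracy measured only on test samples whose true class is `cls`."""
    mask = y_test == cls
    return model.score(X_test[mask], y_test[mask])

# Baseline model trained on clean labels.
clean = LogisticRegression(max_iter=2000).fit(X_train, y_train)

# Poisoned model: an attacker relabels a slice of class 8 as class 3
# in the training set before it is handed to the training pipeline.
y_poisoned = y_train.copy()
flip_idx = np.where(y_poisoned == 8)[0][:60]
y_poisoned[flip_idx] = 3
poisoned = LogisticRegression(max_iter=2000).fit(X_train, y_poisoned)

print("clean model, accuracy on class 8:   ", accuracy_on_class(clean, 8))
print("poisoned model, accuracy on class 8:", accuracy_on_class(poisoned, 8))
```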

Other attacks could feed directly into the operating AI system. For example, meaningless “noise” could be added to all stop signs. This would cause a connectionist AI system to misclassify them. “If an attack causes a system to output a speed limit of 100 instead of a stop sign, this could lead to serious safety issues in autonomous driving,” Von Twickel explained.
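The “noise” in such an evasion attack is not random; it is computed from the model itself. The sketch below shows the mechanics of one common method, the fast gradient sign method (FGSM), against an untrained stand-in network. The image, class labels and epsilon are placeholder assumptions, not a working traffic-sign attack; with a real classifier, the same step is what can turn a stop sign into a speed limit.

```python
"""Minimal sketch of an evasion attack via the fast gradient sign method
(FGSM): add a small, model-derived perturbation to an input so a classifier
misreads it. The network is an untrained stand-in and the "stop sign" is a
random tensor, so the printed labels are arbitrary; only the mechanics of
crafting the perturbation are the point."""

import torch
import torch.nn as nn

# Stand-in classifier: 32x32 RGB image -> 10 sign classes (assumed layout).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).eval()

def fgsm(image: torch.Tensor, true_label: torch.Tensor, eps: float) -> torch.Tensor:
    """Return image + eps * sign(d loss / d image), clipped to valid pixel range."""
    image = image.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), true_label)
    loss.backward()
    return (image + eps * image.grad.sign()).clamp(0.0, 1.0).detach()

sign_image = torch.rand(1, 3, 32, 32)   # placeholder for a photographed stop sign
true_label = torch.tensor([0])          # assume class 0 means "stop sign"

adversarial = fgsm(sign_image, true_label, eps=0.05)
print("clean prediction:      ", model(sign_image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```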

The black-box nature of AI systems is precisely what makes it unclear why or how a given outcome was reached. Image processing involves massive inputs and millions of parameters, which makes it difficult for end users and developers alike to interpret a system’s outputs.

Making AI secure

A first line of AI security would be to keep attackers from accessing the system in the first place. But because adversarial examples transfer between neural networks, adversaries can craft malicious inputs on a substitute model they control and use them against the real system, even when its data is labeled correctly. And, as AI for Good notes, procuring a representative dataset with which to detect and counter malicious examples can be difficult.
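The sketch below illustrates that transferability on synthetic data: an adversarial input crafted against a substitute model the attacker trained themselves is tried against a separately trained target model. Everything here — well, the data, the tiny architectures and the perturbation size — is an assumption chosen only to keep the toy example small; on this easy problem the transferred input usually fools the target too.

```python
"""Minimal sketch of a transfer attack: craft an adversarial input on a
substitute model, then feed it to a separately trained target model the
attacker never had access to. Data, architectures and epsilon are toy
assumptions; the transferred input usually fools the target as well."""

import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic two-class data: two Gaussian blobs in 20 dimensions.
X = torch.cat([torch.randn(500, 20) + 0.5, torch.randn(500, 20) - 0.5])
y = torch.cat([torch.zeros(500, dtype=torch.long), torch.ones(500, dtype=torch.long)])

def train(seed: int) -> nn.Module:
    """Train a small MLP; different seeds give genuinely different models."""
    torch.manual_seed(seed)
    net = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
    opt = torch.optim.Adam(net.parameters(), lr=0.01)
    for _ in range(200):
        opt.zero_grad()
        nn.functional.cross_entropy(net(X), y).backward()
        opt.step()
    return net.eval()

substitute, target = train(seed=1), train(seed=2)  # attacker only sees `substitute`

# One FGSM step computed on the substitute model, applied to a clean sample.
x, label = X[:1].clone().requires_grad_(True), y[:1]
nn.functional.cross_entropy(substitute(x), label).backward()
x_adv = (x + 1.0 * x.grad.sign()).detach()

print("target on clean input:      ", target(X[:1]).argmax(dim=1).item(),
      "(true label:", label.item(), ")")
print("target on transferred input:", target(x_adv).argmax(dim=1).item())
```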

Von Twickel stated that the best strategy involves a combination of methods, including the certification of training data and processes, secure supply chains, continual evaluation, decision logic and standardization.
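Of those methods, certifying training data is the most straightforward to sketch. Below is a minimal, hypothetical example of the idea: hash every file in a dataset when it is signed off, and refuse to train if anything has been added, removed, or modified since. The manifest format and file layout are assumptions; a production pipeline would add signatures and provenance metadata on top.

```python
"""Minimal sketch of training-data certification: record a hash for every
dataset file at sign-off, then fail closed if the data changes before
training. Manifest format and directory layout are illustrative assumptions."""

import hashlib
import json
from pathlib import Path

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def write_manifest(data_dir: str, manifest: str = "manifest.json") -> None:
    """Record a hash for every file in the dataset (done once, at certification)."""
    hashes = {str(p): sha256(p) for p in sorted(Path(data_dir).rglob("*")) if p.is_file()}
    Path(manifest).write_text(json.dumps(hashes, indent=2))

def verify_manifest(data_dir: str, manifest: str = "manifest.json") -> bool:
    """Return False if any file was added, removed, or modified since certification."""
    recorded = json.loads(Path(manifest).read_text())
    current = {str(p): sha256(p) for p in sorted(Path(data_dir).rglob("*")) if p.is_file()}
    return recorded == current

if __name__ == "__main__":
    if not verify_manifest("training_data/"):
        raise SystemExit("Training data fails certification check; refusing to train.")
```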

Taking responsibility for AI

Microsoft, Google and AWS are already setting up cloud data centers and redistributing workloads to accommodate AI computing. And companies like IBM are already helping to deliver real business benefits with AI — ethically and responsibly. Furthermore, vendors are building AI into end-user products, such as Slack and Google’s productivity suite.


For Easterly, the best way to have a sustainable approach to security is to shift the burden onto software providers. “They’re owning the outcomes of security, which means that they’re developing technology that’s secure by design, meaning that they’re tested and developed to reduce vulnerabilities as much as possible,” Easterly said.

This approach has already been advanced by the White House’s new National Cybersecurity Strategy, which proposes new measures aimed at encouraging secure development practices. The idea is to shift liability for software products and services onto the large corporations that create and license them to the federal government.

With the generative AI revolution already upon us, now is the time to think hard about the associated risks, before AI opens up another can of security worms.



