
Meet MLSecOps: industry calls for new measures to secure AI


A meeting of national cybersecurity and open source community leaders in Washington this week, along with a product release from a DevSecOps vendor, highlighted a trend emerging at the nexus of two hot topics in tech: AI and open source security.

The industry’s AI craze has reached an all-time high this year with the rise of large language models (LLMs) and generative AI. But other forms of AI, such as more traditional machine learning, have also become entrenched in IT shops recently. In the open source world, community-driven AI models have emerged, including the Technology Innovation Institute’s Falcon LLM. Developers can use community hubs such as Hugging Face to download open source AI models and the associated packages that run them. It’s here that AI can become a cybersecurity problem, according to the Open Source Security Foundation (OpenSSF).
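To see why, consider that many model formats, including PyTorch’s default `torch.save` output, are built on Python’s pickle serialization, and unpickling untrusted data can execute arbitrary code. Here is a minimal illustrative sketch (not from the article) of how a booby-trapped “model” runs attacker-chosen code the moment it is loaded:

```python
import os
import pickle

# Pickle records how to rebuild an object via __reduce__; on load, the
# returned callable (here os.system) is invoked with the given arguments.
class MaliciousModel:
    def __reduce__(self):
        return (os.system, ("echo 'arbitrary code ran during model load'",))

payload = pickle.dumps(MaliciousModel())

# Simply loading the "model" executes the attacker's command.
pickle.loads(payload)
```

This is why downloading a model can carry the same risks as installing an unvetted package.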

As the OpenSSF kicked off its Secure Open Source Software (SOSS) Summit in Washington, D.C. this week, secure AI — and the use of AI to improve cybersecurity — were among the major topics discussed with officials from the National Security Council, Office of the National Cyber Director, and the Cybersecurity and Infrastructure Security Agency, according to an OpenSSF press release.

“Participants [in] the SOSS Summit … discussed the need for a comprehensive secure software workbench for OSS developers and kickstarted the exploration of the nexus between OSS, security, and AI,” the release stated. It included a list of objectives in this area:

  1. Supply chain security of OSS packages (e.g., PyTorch) used in AI.
  2. Security of open sourced AI packages (e.g., Falcon).
  3. AI in the augmentation (e.g., DARPA AIxCC) of security for OSS.
  4. Applied security of open source inputs/outputs in AI.

JFrog, one SOSS Summit vendor participant, issued a product update this week that addresses the first item on OpenSSF’s list of secure AI concerns: static application security testing and applied security policies for AI models as well as the open source packages that accompany them.


“When you think about machine learning development, there is the model itself … but it’s never standalone,” said Yoav Landman, co-founder and CTO of JFrog, in an interview this week. “It’s a collection of binaries, a collection of artifacts that you want to manage in a single place … and we are giving [customers] a single source of truth for managing all this data.”

The new ML Model Management feature for JFrog’s Software Supply Chain Platform can identify malicious machine learning models along with malicious software packages that may be bundled alongside them. Other DevSecOps vendors such as GitLab and startup Iterative.ai have also brought ML model management, or ModelOps, into the enterprise governance fold with centralized tools to manage organizations’ ML models. But JFrog’s focus on securing AI/ML models — sometimes referred to as MLSecOps — is unique among DevOps vendors so far, said Katie Norton, an analyst at IDC.

“Although I believe they will not be far behind, I haven’t heard anything like this yet from any of the other main DevOps players,” she said. “What JFrog is announcing and [its] long-term vision will certainly bring to the table one solution to the ‘need for a comprehensive secure software workbench'” called for by OpenSSF.

MLSecOps addresses growing AI threats

Recent IDC market research also reflects increasing concerns over securing AI. Its May 2023 generative AI survey of 200 developers in the U.S. found that most respondents are “somewhat,” “fairly” or “completely” confident in the security of the code generated by AI coding tools. But when asked how often they find vulnerabilities in that code, some 42% answered “often” (31.8%) or “always” (10.3%).


“It is important that [DevSecOps and MLOps] processes converge over time, especially as AI becomes more integrated into commercial applications. Before the generative AI boom, … the majority of work being done by data scientists on training and developing models was not making its way directly into users’ hands like it is today,” Norton said. “Similar to the path we have seen software development follow, we will see the same with ML models. … Ideally, software development and model development should come together and flow through as much of the same tooling as possible.”


Otherwise, attackers could inject malicious code into AI models and their associated software components, gain access through them to data scientists’ Jupyter notebooks and other corporate resources, and wreak havoc, Landman warned.

“We actually found in Hugging Face that there are already malicious models,” he said. Hugging Face has become increasingly popular among customers as an easy way to download and retrain AI models, according to Landman.
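JFrog hasn’t detailed its detection internals here, but a common approach, used by open source tools such as picklescan, is to statically walk a pickle file’s opcodes and flag imports of dangerous callables before anything is deserialized. A rough sketch of the idea, with an illustrative denylist:

```python
import pickletools

# Illustrative denylist; real scanners maintain much broader curated lists.
SUSPICIOUS = {
    ("os", "system"), ("posix", "system"), ("nt", "system"),
    ("subprocess", "Popen"), ("builtins", "eval"), ("builtins", "exec"),
}

def scan_pickle(data: bytes) -> list[str]:
    """Flag suspicious imports in a pickle stream without deserializing it."""
    findings, strings = [], []
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            strings.append(arg)  # crude model of the pickle string stack
        elif opcode.name == "GLOBAL":  # older protocols: arg is "module name"
            module, _, name = arg.partition(" ")
            if (module, name) in SUSPICIOUS:
                findings.append(f"{module}.{name}")
        elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
            if (strings[-2], strings[-1]) in SUSPICIOUS:
                findings.append(f"{strings[-2]}.{strings[-1]}")
    return findings

with open("model.pkl", "rb") as f:
    print(scan_pickle(f.read()))
```

Because the stream is never executed, the scan itself is safe to run even on hostile files.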

In response, JFrog’s new product can host Hugging Face models in a separate, IT-controlled proxy, where they can be scanned and governed with security and compliance policies alongside other artifacts in the DevSecOps pipeline. Another update to JFrog’s DevSecOps platform will let organizations block open source libraries and packages before they’re admitted into CI/CD pipelines, according to criteria such as the number of developers who maintain them and the age of the project.
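The article doesn’t show JFrog’s policy syntax, and inventing it would be speculation, but the underlying idea of gating packages on project metadata can be illustrated against PyPI’s public JSON API. Below is a hedged sketch that rejects any package whose newest release is younger than a configurable age, a common guard against freshly hijacked or typosquatted releases; the threshold and criterion are illustrative, not JFrog’s:

```python
import json
import urllib.request
from datetime import datetime, timedelta, timezone

MIN_RELEASE_AGE = timedelta(days=14)  # illustrative threshold

def is_admissible(package: str) -> bool:
    """Admit a PyPI package only if its newest release has had time to be vetted."""
    url = f"https://pypi.org/pypi/{package}/json"
    with urllib.request.urlopen(url) as resp:
        meta = json.load(resp)
    newest = max(
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in meta["releases"].values() if files
        for f in files
    )
    return datetime.now(timezone.utc) - newest >= MIN_RELEASE_AGE

print(is_admissible("torch"))
```

A production gate would combine several such signals, including maintainer count and overall project age, as described above.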

Finally, a new feature called release lifecycle management based on signed evidence adds digital signatures to app packages JFrog calls release bundles. The signatures can be used to allow or deny those components on corporate infrastructure and to verify that they haven’t been tampered with.
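The article doesn’t specify the signature scheme, but the “signed evidence” pattern can be sketched with an ordinary asymmetric keypair: sign a digest of the bundle at release time, then verify it before anything is deployed. A minimal illustration using Ed25519 from the `cryptography` package (the scheme is chosen for the example, not confirmed as JFrog’s):

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

bundle = b"example release bundle contents"

# Release time: sign a SHA-256 digest of the bundle's bytes.
private_key = Ed25519PrivateKey.generate()
signature = private_key.sign(hashlib.sha256(bundle).digest())

# Deploy time: recompute the digest and check it against the signature.
public_key = private_key.public_key()
try:
    public_key.verify(signature, hashlib.sha256(bundle).digest())
    print("bundle verified: allow on corporate infrastructure")
except InvalidSignature:
    print("bundle tampered with or unsigned: deny")
```

Flipping a single byte of `bundle` between signing and verification makes the check fail, which is exactly the tamper evidence the feature is after.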


Norton said IDC’s research indicates secure AI is poised to become a top enterprise IT concern as the usage of AI continues to grow. IDC’s July 2023 worldwide “Future Enterprise Spending and Resiliency Survey” found that the top barriers for using generative AI in organizations are concerns over security, privacy and trustworthiness.

“All of the operational items like developing models, quality data, finding a partner, etc. ranked lower,” Norton said. “Organizations recognize that security is something they need to be aware of when it comes to AI. But most have no clue where to even begin.”

Beth Pariseau, senior news writer at TechTarget, is an award-winning veteran of IT journalism. She can be reached at [email protected] or on Twitter @PariseauTT.


