
Tenable Developing Four Generative AI Tools for Security Research


Tenable is developing four new tools designed to create efficiencies in processes such as reverse engineering, code debugging, web app security and visibility into cloud-based tools.

The Columbia, Md.-based vulnerability management software provider’s research team is conducting ongoing experimentation with generative AI applications such as ChatGPT, and researchers have made the resulting tools publicly available to the security research community through a GitHub repository.

G-3PO for reverse engineering

One such tool, G-3PO, adds another layer of automation to the reverse engineering workflow of Ghidra, an extensible software reverse engineering framework developed by the NSA and released to the public in 2019.

According to Tenable, Ghidra “automates several reverse engineering tasks, including disassembling a binary into its assembly language listing, reconstructing its control flow graph and decompiling that assembly listing into something resembling source code in the C programming language.”

However, that is typically where Ghidra’s translation of machine-readable binary code into something humans can understand ends, resulting in manual work for interpretation and annotation.

Human engineers then have to analyze the decompiled code by repeatedly comparing it to the original assembly listing to ensure no errors from the decompilation process are overlooked. As the engineer examines the code, they add explanatory comments and assign descriptive names to variables and functions to improve readability.

This is where G-3PO comes in, according to Tenable: the tool adds another layer of automation to the reverse engineering workflow by submitting a function’s decompiled C code to a language model (it currently supports models from both OpenAI and Anthropic) and requesting an explanation of what the function does, along with suggestions for descriptive variable names.


G-3PO can then automatically add these names and comments to the Ghidra decompilation listing, the company says.

The result is that the reverse engineer gains a fast, high-level understanding of the code’s functionality without having to decipher every line, getting a “bird’s eye view” of the binary in question and the ability to direct attention to the regions of code that most concern them, which they can then analyze manually.
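Tenable’s published code is the authoritative reference, but the prompting step G-3PO automates can be sketched in a few lines of Python. The function name, prompt wording and model choice below are illustrative assumptions, not the plugin’s actual implementation; the real tool also parses the model’s response and writes the comments and renames back into Ghidra’s decompilation listing via Ghidra’s scripting API.

```python
# Minimal sketch of G-3PO's prompting step (illustrative, not Tenable's code):
# send one function's decompiled C to an LLM and ask for an explanation plus
# variable-rename suggestions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "You are helping reverse engineer a binary. Below is the decompiled C "
    "code for one function. Explain what the function does, then suggest "
    "descriptive variable names as a JSON object mapping old names to new "
    "names.\n\n{code}"
)

def annotate_function(decompiled_c: str, model: str = "gpt-4") -> str:
    """Ask the model for an explanatory comment and rename suggestions."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT.format(code=decompiled_c)}],
    )
    return response.choices[0].message.content
```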

AI Assistant for debugging

Tenable also developed an AI assistant for the GNU Debugger (GDB) to simplify the debugging process. The company says the tool was implemented as a plugin for two popular GDB extension frameworks: GEF and Pwndbg.

The tool supports language models from Anthropic and OpenAI, allowing it to analyze debugging information and answer questions about runtime state or assembly code, Tenable says. The AI assistant reduces the complexity of the debugging process by providing an interactive tool for exploring the debugging context.

“It receives information on registers, stack values, backtrace, assembly and decompiled code if using the Ghidra extension in Pwndbg, providing as much of the relevant context as possible to accompany the user’s queries,” Tenable says in published research. “The user can pose whatever question they like to the model — from general queries like ‘What’s going on here?’ or ‘Does this function look vulnerable?’ to more specific questions like ‘Are there any circumstances that will lead to this function calling free() twice on the same pointer?’ The user can then ask the model follow-up questions for the sake of clarification or correction.”
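In GDB’s embedded Python interpreter, this pattern fits naturally into a custom command. The sketch below is a hypothetical, minimal version of the idea, not Tenable’s plugin: the command name, the context-gathering commands and the prompt wording are all assumptions, and it presumes the openai package is importable from GDB’s Python.

```python
# Hypothetical sketch: a custom GDB command that forwards the current
# debugging context plus the user's question to an LLM.
import gdb  # only available inside GDB's embedded Python interpreter
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

class AskAI(gdb.Command):
    """ai <question> -- ask a language model about the current debug state."""

    def __init__(self):
        super().__init__("ai", gdb.COMMAND_USER)

    def invoke(self, arg, from_tty):
        # Gather runtime context: registers, backtrace and nearby assembly.
        context = "\n".join([
            gdb.execute("info registers", to_string=True),
            gdb.execute("bt", to_string=True),
            gdb.execute("x/16i $pc", to_string=True),
        ])
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system",
                 "content": "You are assisting a reverse engineer inside GDB."},
                {"role": "user", "content": f"{context}\n\nQuestion: {arg}"},
            ],
        )
        print(response.choices[0].message.content)

AskAI()  # registers the command; usage: (gdb) ai What's going on here?
```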


BurpGPT for web app vulnerability testing

Tenable also built BurpGPT, an AI assistant extension for Burp Suite, the web application vulnerability testing tool. According to the company, the tool works by leveraging Burp’s proxy feature to intercept HTTP traffic and then prompting the OpenAI API to analyze that traffic, identify risks and suggest fixes for any issues it finds.

The company says BurpGPT can be used to discover injection points, misconfigurations and more. Leveraging GPT-3.5 and GPT-4, the tool has successfully identified cross-site scripting vulnerabilities and misconfigured HTTP headers without requiring any additional fine-tuning.
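BurpGPT itself runs as a Burp Suite extension and captures traffic through Burp’s proxy; the standalone Python sketch below models only the analysis step it delegates to the model. The function name, prompt wording and model choice are illustrative assumptions rather than the extension’s actual code.

```python
# Illustrative sketch of BurpGPT's analysis step: hand one intercepted HTTP
# exchange to the OpenAI API and ask for likely vulnerabilities and fixes.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def analyze_exchange(request: str, response: str, model: str = "gpt-4") -> str:
    """Ask the model to review one request/response pair for security issues."""
    prompt = (
        "Review the following HTTP request and response for security issues "
        "such as injection points, cross-site scripting, and misconfigured "
        "headers. For each finding, name the risk and suggest a fix.\n\n"
        f"--- REQUEST ---\n{request}\n\n--- RESPONSE ---\n{response}"
    )
    result = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return result.choices[0].message.content
```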

As with the other AI assistants, the goal is to reduce manual work, specifically by automating security testing for web application developers. Tenable researchers also gain another tool to help identify novel exploitation techniques whose detection can be implemented in Tenable products.

EscalateGPT for IAM security

Tenable is also developing EscalateGPT, an AI-powered tool designed to help identify identity and access management (IAM) policy issues. Specifically, Tenable describes it as a Python tool for identifying privilege-escalation opportunities in Amazon Web Services (AWS) IAM.

According to Tenable, EscalateGPT can be used to retrieve all IAM policies associated with users or groups and will then prompt the OpenAI API, asking it to identify potential escalation opportunities and any relevant mitigations. The tool then returns results in a JSON format that includes the path, the Amazon Resource Name (ARN) of the policy that could be exploited for privilege escalation and the recommended mitigation strategies to address the identified vulnerabilities.
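That retrieve-then-prompt flow can be sketched with boto3 and the OpenAI SDK. Everything below (function names, prompt wording, output shape) is an illustrative assumption rather than EscalateGPT’s actual code, and the sketch covers only a single user’s attached policies, without the group handling or pagination a real tool would need.

```python
# Hedged sketch of the EscalateGPT workflow: pull a user's attached IAM
# policies with boto3, then ask the OpenAI API to flag privilege-escalation
# paths and mitigations as JSON.
import json

import boto3
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
iam = boto3.client("iam")  # assumes AWS credentials are configured

def fetch_user_policies(user_name: str) -> list[dict]:
    """Resolve each attached policy ARN to its default policy document."""
    docs = []
    attached = iam.list_attached_user_policies(UserName=user_name)
    for policy in attached["AttachedPolicies"]:
        arn = policy["PolicyArn"]
        version = iam.get_policy(PolicyArn=arn)["Policy"]["DefaultVersionId"]
        document = iam.get_policy_version(
            PolicyArn=arn, VersionId=version
        )["PolicyVersion"]["Document"]
        docs.append({"arn": arn, "document": document})
    return docs

def find_escalations(user_name: str, model: str = "gpt-4") -> str:
    """Ask the model for escalation paths and mitigations, as JSON."""
    prompt = (
        "Analyze these AWS IAM policies for privilege-escalation "
        "opportunities. Respond as JSON listing the policy ARN, the "
        "escalation path, and a recommended mitigation for each finding.\n\n"
        + json.dumps(fetch_user_policies(user_name), indent=2, default=str)
    )
    result = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return result.choices[0].message.content
```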


The company says testing against real-world AWS environments found that GPT-4 managed to identify complex privilege-escalation scenarios based on non-trivial policies spanning multiple IAM accounts. With GPT-3.5-turbo, Tenable found that only half of the privilege-escalation cases tested were identified.

Generative AI and cybersecurity

Tenable notes that malicious actors are already using generative AI and large language models (LLMs) to conduct attacks, but defenders can also leverage these tools to help with a variety of security tasks, such as log parsing, anomaly detection, triaging, incident response, and more.

Early use cases of ChatGPT and generative AI have already included programming and code analysis. Coupled with threat detection and intelligence from trained AI models, this emerging technology will give defenders many other use cases, the company says.

“While we’re only at the start of our journey in implementing AI into tools for security research, it’s clear the unique capabilities these LLMs provide will continue to have profound impacts for both attackers and defenders,” Tenable says in its research report.

Read Tenable’s research for examples and use cases.

 




