
UCSB to lead NSF-funded research institute for next-level AI …


The collaborators will begin by conducting research along four main thrusts: learning and reasoning with domain knowledge; human-agent interaction; multi-agent collaboration; and strategic gaming and tactical planning. These research areas form a foundation of knowledge that can scale to large data sets while extracting meaning and supporting inference and reasoning with the best techniques available.

“Human and AI agents process information in different ways: how they recognize threats, deal with underspecified systems, learn insecure behaviors from history and predict future consequences of actions,” said Singh, whose research involves AI/human interactions. Merging AI with human expertise is a best-of-both-worlds security scenario, he said. “Building a joint human-AI system in which the capabilities of each complement the other, such as presenting a human expert with risk-reward options derived from an AI-learned model, is one of the ways in which the institute will lead the frontier of future research in AI-cybersecurity.”
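The article does not describe the institute's actual models, but a minimal sketch of the idea Singh describes might rank candidate defensive actions by a model's estimated risk reduction against their operational cost and hand the ordered list to a human expert for the final call. Everything below, including the option names, scores and weighting, is an invented illustration rather than ACTION's method.

from dataclasses import dataclass

@dataclass
class ResponseOption:
    """A candidate defensive action with model-estimated risk and cost (hypothetical)."""
    name: str
    est_risk_reduction: float  # share of modeled attack paths closed off (0..1)
    operational_cost: float    # disruption to normal operations (0..1)

def rank_options(options, risk_weight=0.7):
    """Order options by a weighted risk-reward score for a human expert to review."""
    def score(opt):
        return risk_weight * opt.est_risk_reduction - (1 - risk_weight) * opt.operational_cost
    return sorted(options, key=score, reverse=True)

if __name__ == "__main__":
    candidates = [
        ResponseOption("isolate host", est_risk_reduction=0.8, operational_cost=0.6),
        ResponseOption("rotate credentials", est_risk_reduction=0.5, operational_cost=0.2),
        ResponseOption("monitor only", est_risk_reduction=0.1, operational_cost=0.0),
    ]
    for opt in rank_options(candidates):
        print(f"{opt.name}: risk reduction {opt.est_risk_reduction:.0%}, cost {opt.operational_cost:.0%}")

The point of such a presentation is that the model does the large-scale estimation while the human retains judgment over which trade-off is acceptable.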

 Another novel approach the institute will take toward cybersecurity stems from the realization that security systems can be viewed as a stage where multiple agents interact, each with their own motivations, goals and abilities, Hespanha added. “Designing security systems must involve reasoning about how the actions of one agent will affect the behavior of another agent,” he said. “This type of reasoning is needed to make sure that whatever protection mechanisms we deploy to protect our system against one type of attack do not unintentionally create a completely new vulnerability.”
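Hespanha's point can be illustrated with a toy attacker-defender game: once a defense is deployed, a rational attacker shifts to the vector that defense leaves weakest, so the defender must reason about the attacker's best response rather than about one attack in isolation. The payoff numbers, vector names and minimax choice below are invented for illustration and are not the institute's formulation.

# Attacker success probabilities for each (defense, attack vector) pair -- all invented.
ATTACKER_SUCCESS = {
    "harden_network": {"phishing": 0.60, "network_exploit": 0.05},
    "train_users":    {"phishing": 0.15, "network_exploit": 0.50},
}

def attacker_best_response(defense):
    """Return the attack vector a rational attacker would choose against this defense."""
    payoffs = ATTACKER_SUCCESS[defense]
    return max(payoffs, key=payoffs.get)

def best_defense():
    """Pick the defense that minimizes the attacker's best-case success (minimax)."""
    return min(ATTACKER_SUCCESS,
               key=lambda d: ATTACKER_SUCCESS[d][attacker_best_response(d)])

if __name__ == "__main__":
    for d in ATTACKER_SUCCESS:
        br = attacker_best_response(d)
        print(f"Against '{d}', the attacker switches to '{br}' (success {ATTACKER_SUCCESS[d][br]:.0%})")
    print("Minimax defense choice:", best_defense())

In this toy example, hardening the network simply pushes the attacker toward phishing, which is exactly the kind of unintended new vulnerability the quote warns about.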

Importantly, the foundational AI research underpins a layer of defense that goes beyond dealing with anticipated cyberattacks to understanding the context of an attack and the attackers in a rapidly evolving, high-volume landscape of information.

 The AI research informs the cybersecurity element, in which agents are developed for the assessment, detection and attribution of attacks.

The rubber meets the road in the final security thrust, which focuses on the analysis and containment of cyberattacks, as well as the planning and adaptation of response and recovery. This involves using the knowledge gained from the assessment, detection and attribution agents to predict and contain attacks, to repair and restore operations where possible, and to learn hacking strategies that could be used in future attacks, including rare methods of cyber-intrusion.
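As a rough schematic of how such a division of labor could fit together, the sketch below routes a security event through separate assessment, detection and attribution agents and turns their combined findings into containment, recovery and learning actions. The agent logic, event fields and action strings are placeholders, not ACTION's design.

from typing import Callable, Dict, List

def assessment_agent(event: Dict) -> Dict:
    """Judge how critical the affected asset is (placeholder rule)."""
    return {"asset_criticality": "high" if event.get("host") == "db01" else "low"}

def detection_agent(event: Dict) -> Dict:
    """Flag suspicious activity (placeholder threshold)."""
    return {"suspicious": event.get("failed_logins", 0) > 10}

def attribution_agent(event: Dict) -> Dict:
    """Guess at the likely actor from observed tooling (placeholder rule)."""
    return {"likely_actor": "known_group" if event.get("tooling") == "custom_rat" else "unknown"}

def plan_response(event: Dict, agents: List[Callable[[Dict], Dict]]) -> List[str]:
    """Merge agent findings and derive containment, recovery and learning steps."""
    findings: Dict = {}
    for agent in agents:
        findings.update(agent(event))
    actions: List[str] = []
    if findings.get("suspicious"):
        actions.append("contain: isolate the affected host")
        if findings.get("asset_criticality") == "high":
            actions.append("recover: fail over to a replica and restore from backup")
        if findings.get("likely_actor") != "unknown":
            actions.append("learn: update the playbook with this actor's tactics")
    return actions

if __name__ == "__main__":
    event = {"host": "db01", "failed_logins": 42, "tooling": "custom_rat"}
    print(plan_response(event, [assessment_agent, detection_agent, attribution_agent]))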

Vigna likens the overall strategy to the defense used in soccer, in which the goalkeeper must observe the strategies and tactics of the opposing team and decide where to concentrate defensive efforts.

 “You cannot cover everything 100% all of the time, but having these hints allows you to focus or change your security posture,” he said. The use of AI allows the defenders to reason at large scales, predict how the attack might unfold and respond rapidly.
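One way to picture "changing your security posture" on hints is to treat monitoring effort as a fixed budget that gets reallocated toward the attack vectors recent observations make more likely. The prior weights, hint multipliers and vector names below are illustrative assumptions, not data from the institute.

def reallocate_budget(prior: dict, hints: dict, total_hours: float = 40.0) -> dict:
    """Scale prior vector likelihoods by hint multipliers, renormalize, and split the budget."""
    scaled = {vector: prior[vector] * hints.get(vector, 1.0) for vector in prior}
    norm = sum(scaled.values())
    return {vector: total_hours * weight / norm for vector, weight in scaled.items()}

if __name__ == "__main__":
    prior = {"phishing": 0.5, "supply_chain": 0.2, "network_exploit": 0.3}
    # A hint (e.g., chatter about supply-chain tooling) triples that vector's weight.
    hints = {"supply_chain": 3.0}
    for vector, hours in reallocate_budget(prior, hints).items():
        print(f"{vector}: {hours:.1f} analyst-hours/week")

The defense never covers everything, but the hints shift where the limited attention goes, which is the posture change Vigna describes.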

 In addition to developing next-generation cybersecurity, the ACTION Institute will implement programs to engage K-12 students as well as undergraduate, graduate and postdoctoral students for education and workforce development, with an emphasis on outreach to underrepresented communities.

 “We have an incredible need for people who know how to use security and know how to interact with and program AI,” Vigna said. Just as crucially, the institute will create a network of industry collaborators who can apply ACTION’s methods and research results to real-world settings.

 And the results might even go beyond cybersecurity, with the methods and agents developed for this project able to inform other areas with large and rapidly evolving datasets, such as medical diagnostics and epidemiology. “That would be one of our metrics for success,” Vigna said.


