
Leading AI firms volunteer security commitments to Biden administration – SC Media


Security was a critical component of the “voluntary commitments” around artificial intelligence the Biden administration said it obtained from seven leading AI companies that met with the president at the White House on Friday.

In a fact sheet, the Biden administration said it plans to develop an executive order and pursue bipartisan legislation to help the United States take the lead in AI innovation. The commitments themselves cover measures such as red teaming, deploying watermarks, and monitoring for insider threats.

The seven AI companies that met with the administration included representatives from Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI.

Under the commitments, the major AI players agreed that AI firms have a duty to build systems that put security first. That means safeguarding their AI models against cyber and insider threats, and sharing best practices and standards to prevent misuse, reduce risks to society and protect national security.

The AI companies also recognized that it’s important for people to understand when audio or visual content is AI-generated. To advance this goal, they agreed to develop “watermarking systems” for audio or visual content created by any of their publicly available systems. They also agreed to develop APIs to determine whether a particular piece of content was created with their systems.

These “watermarks” are considered important because, although imperceptible to a human reader, they let computers detect that a piece of text more than likely came from an AI system. If they are embedded in the output of large language models (LLMs), industry experts believe they could help defenders stop attacks.
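As a rough illustration of how such detection can work, the sketch below applies a simplified statistical “green list” check of the sort discussed in the research literature on LLM watermarking. The whitespace tokenization, hash-based seeding, and 0.5 threshold are assumptions chosen for illustration only; this does not represent the actual watermarking schemes or APIs the companies committed to build.

```python
import hashlib

# Simplified illustration of statistical text-watermark detection,
# loosely modeled on "green list" LLM watermarking schemes. Sketch
# only: the tokenization, hash seeding, and threshold are assumptions,
# not any vendor's actual system.

GREEN_FRACTION = 0.5  # assumed share of the vocabulary marked "green" at each step


def is_green(prev_token: str, token: str) -> bool:
    """Pseudo-randomly place a token on the green list, seeded by its predecessor."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] / 255.0 < GREEN_FRACTION


def green_ratio(tokens: list[str]) -> float:
    """Fraction of tokens that land on the green list, given the preceding token."""
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)


# Unwatermarked text should hover near the 0.5 baseline; text from a model
# that preferentially samples green tokens would score well above it.
sample = "the quick brown fox jumps over the lazy dog".split()
print(f"green ratio: {green_ratio(sample):.2f}")
```

A real detector would operate on the model’s own tokenizer output and report a statistical confidence rather than a raw ratio, but the principle is the same.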

How should the U.S. regulate AI?

Over the past several months, policy experts have been critical of the U.S. response on AI. For example, a Harvard Business Review article in May said that as the EU continues to pass substantial new internet legislation, the U.S. Congress “dithers, leaving the FTC and other federal agencies largely without the tools or resources to compete with their European counterparts.”


Friday’s announcement is a response to some of the criticism that the Biden administration has not led quickly enough on AI.

Overall, from politicians to industry experts and think-tank analysts, most were pleased to learn of the commitments, but said this was only the beginning of a long process that will more than likely lead to some form of regulation.

“I’m glad to see the administration taking steps to address the security and trust of AI systems, but this is just the beginning,” said Sen. Mark Warner, D-Va., who chairs the Senate Select Committee on Intelligence. “While we often hear AI vendors talk about their commitment to security and safety, we have repeatedly seen the expedited release of products that are exploitable, prone to generating unreliable outputs, and susceptible to misuse. These commitments are a step in the right direction, but, as I have said before, we need more than industry commitments. We also need some degree of regulation.”

Michael Daniel, president and CEO of the non-profit Cyber Threat Alliance, said this voluntary agreement was the right place to start. Daniel added that because AI technology has been developing rapidly, the U.S. government needed to respond now.

“But it’s much better to respond with this kind of voluntary agreement that’s easy to update as we learn more than to immediately rush to regulations that are harder to change,” said Daniel.

Daniel said the industry will have to grapple with the following three issues around AI security:

  • Impact on offense and defense: Malicious actors and defenders will use AI in their activities. However, whether AI will favor the attacker or defender is still up for debate, and it might end up being a wash. For example, while generative AI will let malicious actors write better phishing emails, defenders can also use AI to help detect phishing emails. Daniel said we’ll need more analysis to determine whether AI will tilt the balance towards attackers or defenders. 
  • Security of AI itself: Because AI tools are still relatively new, we don’t know the most effective practices for protecting them from disruption or manipulation. Data poisoning and algorithmic tampering are hard to identify: has the AI been corrupted, or is it just hallucinating? Right now, there’s a lot we still don’t know about how to secure AI systems themselves. However, there are steps we can take where the problem looks more like traditional cybersecurity. For example, accounts with administrative privileges for AI systems should use multi-factor authentication and be limited in the activities they can perform (a minimal sketch of this kind of control follows this list).
  • Impact on the broader ecosystem: As generative AI comes into widespread use, some organizations are hardening their APIs to prevent data scraping. However, these actions reduce the data available for any purpose, including estimating how widespread a vulnerability is across the ecosystem.
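To make the second point concrete, the sketch below shows the kind of traditional control Daniel mentions: gating administrative access to an AI system on multi-factor authentication, a required role, and an explicit allow-list of actions. The class, role, and action names are hypothetical, chosen only to illustrate the pattern.

```python
from dataclasses import dataclass, field

# Hedged sketch of a traditional control applied to AI systems: admin
# actions require MFA, an admin role, and an explicitly allow-listed
# action. All names here are hypothetical.

ADMIN_ALLOWED_ACTIONS = {"deploy_model", "rotate_keys", "view_audit_log"}


@dataclass
class AdminSession:
    user: str
    mfa_verified: bool
    roles: set = field(default_factory=set)


def authorize(session: AdminSession, action: str) -> bool:
    """Permit an administrative action only under MFA and least privilege."""
    if not session.mfa_verified:
        return False  # no MFA, no administrative access
    if "ai_admin" not in session.roles:
        return False  # least privilege: explicit role required
    return action in ADMIN_ALLOWED_ACTIONS  # deny anything not allow-listed


session = AdminSession(user="alice", mfa_verified=True, roles={"ai_admin"})
print(authorize(session, "deploy_model"))   # True
print(authorize(session, "retrain_model"))  # False: not on the allow-list
```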

“Ultimately, we will need to address all three of these security issues,” said Daniel. “While it’s still not clear what kind of testing will be involved, we can imagine testing that would be typical for any network application: does the company monitor and log code updates?”

Mike Britton, chief information security officer at Abnormal Security, said he believes that it’s still an open question whether the federal government will need to regulate AI.

“Some will say voluntary systems have been proven, such as in the ad-tech space, but others argue that regulations such as GDPR were necessary because ad-tech didn’t do a good enough job of policing itself,” said Britton. “The most significant regulation will be around ethics, transparency and assurances in how the AI operates, and having some mechanism that still requires a human component. Any good AI solution should also enable a human to make the final decision when it comes to executing — and potentially undoing — any actions taken by AI.”
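One way to read Britton’s last point is as a human-in-the-loop gate: the AI can propose an action, but a person approves it before it runs and retains an undo path afterward. The sketch below is a minimal, hypothetical illustration of that pattern; the class and function names are not any product’s real API.

```python
from dataclasses import dataclass
from typing import Callable

# Minimal, hypothetical sketch of a human-in-the-loop gate: an
# AI-proposed action runs only after explicit human approval, and
# every action carries an undo step so it can be rolled back.


@dataclass
class ProposedAction:
    description: str
    execute: Callable[[], None]
    undo: Callable[[], None]


def review_and_run(action: ProposedAction) -> None:
    """Require an explicit human decision before an AI-proposed action executes."""
    answer = input(f"AI proposes: {action.description}. Approve? [y/N] ")
    if answer.strip().lower() != "y":
        print("Rejected by reviewer; nothing executed.")
        return
    action.execute()
    print("Executed. Keep action.undo available in case a rollback is needed.")


# Example: quarantining a suspicious email, with an undo path.
quarantine = ProposedAction(
    description="quarantine message id 42",
    execute=lambda: print("message 42 quarantined"),
    undo=lambda: print("message 42 restored to inbox"),
)
# review_and_run(quarantine)  # uncomment to run interactively
```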

Cybersecurity pros have also questioned whether it’s possible to make AI easier for industry professionals to use for defensive purposes, but harder for threat actors to leverage for malicious ends.

“In a word: ‘no,’” said Mike Parkin, senior technical engineer at Vulcan Cyber. “Cybersecurity professionals will generally bind themselves to the rules and commit to doing their job legally and ethically. Malicious actors put themselves under no such constraint. While they may have a challenge accessing some of the larger commercial engines that follow the guidelines, there’s nothing to keep them from investing in their own, or to keep hostile nation-states from creating purpose-built engines for the task.”


Damir Brescic, chief information security officer at Inversion6, added that the commitments highlight the importance of data privacy and security. 

“As a cybersecurity expert, I appreciate the focus on safeguarding personal and sensitive data in AI systems,” said Brescic. “Developers and organizations are urged to implement robust data protection measures, including encryption and access controls, to prevent unauthorized access or misuse of the data.

“However, the guidelines should have done more to emphasize the need for ongoing monitoring and vulnerability assessments to identify and mitigate potential security risks associated with AI systems,” he continued. “More work here is clearly going to be needed. I wouldn’t be surprised if a net-new AI certification process evolved out of this initiative.”



