
Warner Calls on AI Companies to Prioritize Security and Prevent … – Senator Mark Warner


WASHINGTON – U.S. Sen. Mark R. Warner (D-VA), Chairman of the Senate Select Committee on Intelligence, today urged CEOs of several artificial intelligence (AI) companies to prioritize security, combat bias, and responsibly roll out new technologies. In a series of letters, Sen. Warner expressed concerns about the potential risks posed by AI technology, and called on companies to ensure that their products and systems are secure.

In the past several years, AI technology has rapidly advanced, while chatbots and other generative AI products have simultaneously widened the accessibility of AI products and services. As these technologies are rolled out broadly, open source researchers have repeatedly demonstrated a number of concerning, exploitable weaknesses in prominent products, including the ability to generate credible-seeming misinformation, develop malware, and craft sophisticated phishing techniques.

“[W]ith the increasing use of AI across large swaths of our economy, and the possibility for large language models to be steadily integrated into a range of existing systems, from healthcare to finance sectors, I see an urgent need to underscore the importance of putting security at the forefront of your work,” Sen. Warner wrote. “Beyond industry commitments, however, it is also clear that some level of regulation is necessary in this field.”

Sen. Warner highlighted several specific security risks associated with AI, including data supply chain security and data poisoning attacks. He also expressed concerns about algorithmic bias, trustworthiness, and potential misuse or malicious use of AI systems.

The letters include a series of questions for companies developing large-scale AI models to answer, aimed at ensuring that they are taking appropriate measures to address these security risks. Among the questions are inquiries about companies’ security strategies, limits placed on third-party access to their models that could undermine the ability to evaluate model fitness, and steps taken to ensure secure and accurate data inputs and outputs. Recipients of the letter include the CEOs of OpenAI, Scale AI, Meta, Google, Apple, Stability AI, Midjourney, Anthropic, Percipient.ai, and Microsoft.

Sen. Warner, a former tech entrepreneur, has been a vocal advocate for Big Tech accountability and a stronger national posture against cyberattacks and misinformation online. He has introduced several pieces of legislation aimed at addressing these issues, including the RESTRICT Act, which would comprehensively address the ongoing threat posed by technology from foreign adversaries; the SAFE TECH Act, which would reform Section 230 and allow social media companies to be held accountable for enabling cyber-stalking, online harassment, and discrimination on social media platforms; and the Honest Ads Act, which would require online political advertisements to adhere to the same disclaimer requirements as TV, radio, and print ads.


A copy of the letters can be found here and below. 

I write today regarding the need to prioritize security in the design and development of artificial intelligence (AI) systems. As companies like yours make rapid advancements in AI, we must acknowledge the security risks inherent in this technology and ensure AI development and adoption proceeds in a responsible and secure way. While public concern about the safety and security of AI has been on the rise, I know that work on AI security is not new. However, with the increasing use of AI across large swaths of our economy, and the possibility for large language models to be steadily integrated into a range of existing systems, from healthcare to finance sectors, I see an urgent need to underscore the importance of putting security at the forefront of your work. Beyond industry commitments, however, it is also clear that some level of regulation is necessary in this field.

I recognize the important work you and your colleagues are doing to advance AI. As a leading company in this emerging technology, I believe you have a responsibility to ensure that your technology products and systems are secure. I have long advocated for incorporating security-by-design, as we have found time and again that failing to consider security early in the product development lifecycle leads to more costly and less effective security. Instead, incorporating security upfront can reduce costs and risks. Moreover, the last five years have demonstrated the ways in which the speed, scale, and excitement associated with new technologies have frequently obscured the shortcomings of their creators in anticipating the harmful effects of their use. AI capabilities hold enormous potential; however, we must ensure that they do not advance without appropriate safeguards and regulation.


While it is important to apply many of the same security principles we associate with traditional computing services and devices, AI presents a new set of security concerns that are distinct from traditional software vulnerabilities. Some of the AI-specific security risks that I am concerned about include the origin, quality, and accuracy of input data (data supply chain), tampering with training data (data poisoning attacks), and inputs to models that intentionally cause them to make mistakes (adversarial examples). Each of these risks further highlights the need for secure, quality data inputs. Broadly speaking, these techniques can effectively defeat or degrade the integrity, security, or performance of an AI system (including the potential confidentiality of its training data). As leading models are increasingly integrated into larger systems, often without fully mapping dependencies and downstream implications, the effects of adversarial attacks on AI systems are only magnified.
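For readers unfamiliar with the last of those risks, the following is a minimal, purely illustrative sketch (not part of Sen. Warner’s letter) of how an adversarial example works against a toy classifier. The model, data, and perturbation budget are all invented for demonstration; attacks on production AI systems apply the same basic idea, nudging an input in the direction that most degrades the model’s decision, at far greater scale and subtlety.

```python
# Illustrative sketch of an "adversarial example" against a toy linear classifier.
# Hypothetical example only: the data, model, and step size are invented here.
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: two Gaussian clusters (class 0 and class 1).
X = np.vstack([rng.normal(-1.0, 0.5, size=(100, 2)),
               rng.normal(+1.0, 0.5, size=(100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

# Fit a logistic-regression classifier with plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
    w -= 0.5 * (X.T @ (p - y) / len(y))
    b -= 0.5 * np.mean(p - y)

def predict(x):
    return int((x @ w + b) > 0)

# Take a correctly classified class-1 point and nudge it against the gradient
# of its score (an FGSM-style perturbation) until the predicted label flips.
x = np.array([1.0, 1.0])
eps = 0.1
x_adv = x.copy()
while predict(x_adv) == 1:
    x_adv -= eps * np.sign(w)   # small step that most reduces the class-1 score

print("original input:", x, "-> class", predict(x))
print("adversarial input:", x_adv, "-> class", predict(x_adv))
print("total perturbation:", x_adv - x)
```

Data poisoning is the training-time analogue: rather than perturbing a single input at inference time, an attacker corrupts a portion of the training data so that the model itself learns a flawed decision boundary, which is why the letter ties both risks back to secure, quality data inputs.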

In addition to those risks, I also have concerns regarding bias, trustworthiness, and potential misuse or malicious use of AI systems. In the last six months, we have seen open source researchers repeatedly exploit a number of prominent, publicly accessible generative models – crafting a range of clever (and often foreseeable) prompts to easily circumvent a system’s rules. Examples include using widely adopted models to generate malware, craft increasingly sophisticated phishing techniques, contribute to disinformation, and provide harmful information. It is imperative that we address threats not only to digital security, but also to physical and political security.

In light of this, I am interested in learning about the measures that your company is taking to ensure the security of its AI systems. I request that you provide answers to the following questions no later than May 26, 2023.

Questions: 

1. Can you provide an overview of your company’s security approach or strategy?

2. What limits do you enforce on third-party access to your model and how do you actively monitor for non-compliant uses?

3. Are you participating in third party (internal or external) test & evaluation, verification & validation of your systems?

4. What steps have you taken to ensure that you have secure and accurate data inputs and outputs? Have you provided comprehensive and accurate documentation of your training data to downstream users to allow them to evaluate whether your model is appropriate for their use?


5. Do you provide complete and accurate documentation of your model to commercial users? Which documentation standards or procedures do you rely on?

6. What kind of input sanitization techniques do you implement to ensure that your systems are not susceptible to prompt injection techniques that pose underlying system risks?

7. How are you monitoring and auditing your systems to detect and mitigate security breaches?

8. Can you explain the security measures that you take to prevent unauthorized access to your systems and models?

9. How do you protect your systems against potential breaches or cyberattacks? Do you have a plan in place to respond to a potential security incident? What is your process for alerting users that have integrated your model into downstream systems?

10. What is your process for ensuring the privacy of sensitive or personal information that your system uses?

11. Can you describe how your company has handled past security incidents?

12. What security standards, if any, are you adhering to? Are you using NIST’s AI Risk Management Framework?

13. Is your company participating in the development of technical standards related to AI and AI security?

14. How are you ensuring that your company continues to be knowledgeable about evolving security best practices and risks? 

15. How is your company addressing concerns about AI trustworthiness, including potential algorithmic bias and misuse or malicious use of AI?

16. Have you identified any security challenges unique to AI that you believe policymakers should address?

Thank you for your attention to these important matters, and I look forward to your response.

###


