Terrorists and other bad actors could soon leverage increasingly sophisticated artificial intelligence (AI) to develop biological weapons, according to the head of a leading AI company. Dario Amodei, CEO of the AI safety and research company Anthropic, raised concerns about the grave risk of AI empowering individuals to create devastating pathogens in testimony before a US Senate Judiciary subcommittee. Amodei explained that AI tools can already assist with certain steps of bioweapons production, although they currently have significant limitations. However, he warned that future AI systems are likely to fill in the missing pieces, enabling many more actors to carry out large-scale biological attacks.
Amodei emphasized that this poses a serious threat to national security and demands a systemic policy response. He recommended limiting exports of equipment that could aid “bad actors” and establishing a strict testing regime for powerful new AI models. He also called for further testing of the systems used to audit AI tools: detecting all of a model’s possible malicious behaviors is currently difficult without first deploying it broadly to users, which itself heightens the risk.
The concerns raised by Anthropic echo UK Prime Minister Rishi Sunak’s acknowledgment of the risks posed by runaway AI. In May, Sunak met with Amodei and the CEOs of Google DeepMind and OpenAI to discuss safety measures and the potential for international collaboration on AI regulation. They emphasized that regulation must keep pace with the rapid advance of the technology to mitigate risks ranging from disinformation to threats to national security and even existential harm.
The UK-based AI safety company Conjecture also backed the calls for action. Andrea Miotti, its Head of Strategy and Governance, argued that powerful autonomous AIs smarter than humans should be banned, and advocated a global moratorium on the proliferation of such systems to address risks including bioweapons development.
Addressing the potential dangers of AI and its implications for national security is becoming increasingly urgent. The international community must work together to establish robust regulations and safeguards to prevent AI from being misused for harmful purposes.