
AI and China are 'defining challenges of our time,' CISA director says – Nextgov


The head of the Cybersecurity and Infrastructure Security Agency warned on Wednesday about the security risks posed by generative artificial intelligence technologies and an increasingly bellicose China, calling them “the two epoch-defining challenges of our time.”

During an event hosted by Axios, CISA Director Jen Easterly outlined her concerns about Beijing’s aggressive cyber posture and the rise of largely unregulated generative AI tools and called for tech firms and critical infrastructure operators to prioritize enhanced security practices. 

Easterly cited, in part, the intelligence community’s 2023 annual threat assessment, publicly released in March, and noted that it outlined how “in the event of a conflict, like an invasion or a blockade of the Taiwan Strait, we will almost certainly see aggressive cyber operations here in the U.S.” She said that these cyberattacks would likely be designed “to delay military deployment and to induce societal panic” and would rely on digital intrusions “capable of disrupting transportation, oil and gas pipelines.”

CISA, in collaboration with its Five Eyes intelligence-sharing partners, published a joint cybersecurity advisory last week that shared technical details about a Beijing-linked cyber threat actor, known as Volt Typhoon, that is targeting the networks of critical infrastructure operators. Easterly said the advisory was “a real wake-up call for our concerns about why we need to increase the security and resilience of our critical infrastructure.”

“These are the types of threats that we need to be prepared to defend against, and that’s why continuing to resource our budget is so incredibly important,” she added, citing the White House’s proposed fiscal year 2024 budget, which would allocate $3.1 billion to CISA, an increase of $145 million over the agency’s current budget.


Easterly, who has been pushing in recent months for tech firms and software manufacturers to prioritize security when developing new products, also reiterated her call for companies to take a more active role in securing their services against cyber threats, reframing it to address concerns about the unchecked rise of generative AI technologies.

She pointed to a joint statement released by the Center for AI Safety on Tuesday, in which more than 350 people — including OpenAI CEO Sam Altman and other tech executives — said that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

“When you have 350 experts coming out and saying there’s a potential for extinction of humanity, I think there’s a lot to worry about there,” Easterly said, adding that “we need to rapidly get our arms around this” when it comes to regulating AI tools. 

Easterly said AI, like other technology and software, was “yet another flavor of technology that has to be built [with] security up-front, safety up-front.”

“I see the world through three decades of intelligence, counterterrorism and cybersecurity,” she added. “And at the end of the day, these capabilities will do amazing things. They’ll make our lives easier and better. They’ll make lives easier and better for our adversaries, who will flood the space with disinformation, who will be able to create cyberattacks and all kinds of weapons.” 

While some lawmakers and tech executives like Altman have already called for greater oversight and regulations around the use and development of AI technologies, Easterly said that the companies themselves can already take steps to prevent the extinction-type risk outlined in yesterday’s joint statement by working to bake security into their services. 


“I would ask these 350 people and the makers of AI, while we’re trying to put a regulatory framework in place, think about self-regulation,” Easterly said. “Think about what you can do to slow this down, so we don’t cause an extinction event for humanity.”




