Since OpenAI released its ChatGPT large language model chatbot in November 2022, tech and other industries have raced to adopt generative artificial intelligence (A.I.) to improve and streamline internal operations, create new products for customers, or simply test the technology’s capabilities.
While users continue experimenting with generative A.I., others are asking about the ethical and legal implications of this type of technology. Their questions include:
- Is A.I. a national security threat?
- How will it change the role of IT and cybersecurity in the future?
- What guardrails can be applied?
- How can cybersecurity specialists best defend against attackers also using generative A.I. tools?
In the last month, the Biden administration has stepped into this field of uncertainty with a new executive order that offers guidelines for how A.I. tools such as ChatGPT and Google’s Bard should be used.
“Responsible A.I. use has the potential to help solve urgent challenges while making our world more prosperous, productive, innovative, and secure,” according to the executive order published Oct. 30. “At the same time, irresponsible use could exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security.”
The executive order seeks to put secure limits on the expansion of A.I. technologies while also encouraging development and information sharing with federal agencies and regulators. A White House fact sheet noted that the order would take steps to:
- Require A.I. developers to share their safety test results and other critical information with federal government agencies.
- Develop standards, tools and tests to help ensure that A.I. systems are safe, secure and trustworthy.
- Create ways to protect citizens from A.I.-enabled fraud and deception by establishing standards and best practices for detecting A.I.-generated content and authenticating official content.
- Establish an advanced cybersecurity program to develop A.I. tools to find and fix vulnerabilities in critical software.
While the specifics are now under development, the Biden administration’s executive order will begin making companies and the technology industry rethink A.I. developments, experts noted. At the same time, tech and security professionals have fresh avenues to carve out career opportunities and skill sets to take advantage of a changing landscape.
“Anytime the president of the United States issues an executive order, government organizations and private industry will respond. This executive order signals a prioritization of artificial intelligence by the executive branch, which will most certainly translate into new programs and employment opportunities for those with relevant expertise,” Darren Guccione, CEO and co-founder at Keeper Security, recently told Dice.
“A.I. has already had a significant impact on cybersecurity, for cyber defenders, who are finding new applications for cybersecurity solutions as well as cyber criminals who can harness the power of A.I. to create more believable phishing attacks, develop malware and increase the number of attacks they launch,” Guccione added.
How A.I. Will Change Cybersecurity and Tech Careers
Since the start of his administration, President Joe Biden has issued several executive orders designed to influence the development of new information technology and cybersecurity. These orders, including the latest on A.I., also have the potential to change how tech pros approach their jobs.
When looking at the broad impacts of generative A.I., Piyush Pandey, CEO of security firm Pathlock, sees the technology already interacting with personal, customer and financial data. This means the roles of data privacy and data security managers will need to change and expand, especially regarding how specific data sets are leveraged as part of learning models.
Additional changes to the cybersecurity field are also coming, including greater automation involving what are now manual tasks for security teams.
“From intelligent response automation to behavioral analysis and prioritization of vulnerability remediation, A.I. is already adding value within the cybersecurity field,” Pandey told Dice. “As A.I. automates more tasks in cybersecurity, the role of cybersecurity professionals will evolve, as opposed to becoming a commodity. Talented cybersecurity pros with a growth mindset will become increasingly valuable as they provide the practical insights to guide A.I.’s deployment internally.”
As the effects of the A.I. executive order become clearer, Marcus Fowler, CEO of Darktrace Federal, sees a greater need for tech professionals who can work on red team exercises, where engineers play the role of “attacker” to find weaknesses in networks.
“In the case of A.I. systems, that means testing for security problems, user failures and other unintended questions. In cybersecurity, red teaming is incredibly helpful but not a cure-all solution—there is a whole chain of steps that companies must take to help secure their systems,” Fowler told Dice. “Many systems and safeguards need to be put in place before red teaming can be useful. Red-teaming is also not a one-and-done deal. It needs to be a continuous process to test whether security and safety measures are keeping pace with evolutions in digital environments and A.I. models.”
Tech Career Opportunities in Government
While much of the conversation around the executive order centers on what it means for private enterprises, there is also an expanded role that the federal government will now play in regulating and even helping to develop these A.I. tools and platforms.
“The executive order could potentially create A.I. jobs in various agencies impacted by this order and certainly in regulatory agencies,” John Bambenek, principal threat hunter at security firm Netenrich, told Dice. “In the private sector, the jobs are already there as there is a gold rush to try to claim market share. What we’ve seen is a few organizations creating A.I. safety teams, but they usually tend to have minimal impact, if they exist for the long term at all.”
With the executive order calling for private businesses to share information about A.I. with agencies and regulators, Guccione sees a larger role for tech professionals in the federal government who understand the technology and how it is developed.
“Developers of the most powerful A.I. systems will be required to share their safety test results and other critical information with the U.S. government, and extensive red-team testing will be done to help ensure that A.I. systems are safe, secure and trustworthy before they become available to the public,” Guccione added. “Additionally, standardized tools and tests will be developed and implemented to provide governance over new and existing A.I. systems. Given the range of recommendations and actions included, organizations will likely feel the effects of this executive order across all sectors, regardless of where they are in their A.I. journey or what type of A.I. system is being used.”
Taking Steps to Create a Secure Culture
While it’s likely to take months or even years to see results from the executive order, experts noted that this action by the White House is likely to lead to additional attention on cybersecurity.
This includes added attention to how secure these A.I. systems are, and how attackers can use the technology for themselves.
The increasing prevalence of deep fakes, mass email phishing campaigns and advanced social engineering techniques powered by A.I. should make companies invest more in tech professionals who understand these threats and how to counter them, said Craig Jones, vice president of security operations at Ontinue.
“A.I. can also be utilized to counter these threats. For example, A.I.-based security systems can detect and block phishing emails or identify deep fake content,” Jones told Dice. “While technology can play a significant role in mitigating social engineering risks, relying solely on technology is not a foolproof solution. A balanced approach that combines technology, regular awareness training, and the development of a strong security culture is essential to reduce the impact of social engineering attacks.”
Darktrace’s Fowler also noted that the executive order is likely to bring added attention to the security pitfalls of A.I. In turn, better development is required to address these issues.
“You cannot achieve A.I. safety without cybersecurity: it is a prerequisite for safe and trusted general-purpose A.I. That means taking action on data security, control and trust,” Fowler noted. “It’s promising to see some specific actions in the executive order that start to address these challenges. But as the government moves forward with regulations for A.I. safety, it’s also important to ensure that it is enabling organizations to build and use A.I. to remain innovative and competitive globally and stay ahead of the bad actors.”