The Biden-Harris administration recently unveiled a new national cybersecurity strategy aimed at protecting America’s digital infrastructure. It comes as high-profile attacks continue to target both government agencies and private companies.
Shiu-Kai Chin is a professor of electrical engineering and computer science at Syracuse University. He is affiliated with the university’s Institute for Security Policy and Law and is an expert in computer security.
Here, Chin helps break down the new strategy and looks at the roles government and corporations will play in securing critical infrastructure.
Just how big of a problem is cybersecurity, and why is it important to tackle it at the federal level?
Safety and security in cyberspace is a global wicked problem. That is, a problem that cannot be solved once and for all because of the myriad stakeholders with differing views of what counts as adequate safety and security. The root causes evolve and are interconnected. This is very similar to other wicked problems, such as climate change.
The federal government plays an important role in convening stakeholders nationally and internationally to build consensus and international agreement on standards, acceptable behavior, and minimum safety levels. Think about air travel and commerce. Think about arms control.
What do you see as some of the key components of the administration’s strategy?
Important elements of the strategy include coordinating regulations, procurement, economic incentives, and R&D with the specific goal of making cyber-systems and cyberspace safe and secure as a realm of operations for people, businesses, and governments. For example, tech companies such as software and semiconductor manufacturers often focus on minimizing “time to dollars.” This type of thinking rewards companies that rush products to market with new and exciting features without worrying about cybersecurity. It effectively transfers risk to users while setting up de facto standards for new products without much thought to security. “Leveling the field” means finding ways to reward companies and innovators who think about security from the start, so that products with cybersecurity built in (much like safety is built into all our electrical appliances through UL certification) become the norm, not the exception.
Do you feel the current strategy will have a measurable impact on future cyberattacks?
Yes, but it will take time. We didn’t arrive in this place a minute ago. Our problems started when, for understandable reasons, personal computers and the chips that powered them had all the security we used to have on mainframes stripped out of them (personal means only the owner has access, right?), and we then networked those PCs to the Internet. Networking invalidated an important design assumption behind the PC: that only the owner would ever have access.
The emphasis on “zero trust,” i.e., every access and action must be authenticated and authorized against appropriate policies, has “security by design” as a goal, as opposed to “bolt-on security” added after a product is built with inherent security flaws that cannot be fixed. There are a lot of so-called legacy systems with poor security in operation. Things will get better to the extent that these systems are phased out of critical infrastructure and replaced by systems where security is part of the conceptual design from the start.
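To make the “authenticate and authorize every request” idea concrete, here is a minimal, hypothetical sketch in Python. It is not drawn from the administration’s strategy or from Chin’s own work; the credential store, policy table, and names are invented for illustration. The point is only that in a zero-trust design, nothing executes until both checks pass, and anything not explicitly permitted is denied by default.

```python
# Minimal zero-trust sketch: every request is authenticated (who is asking?)
# and authorized (does policy allow this action on this resource?) before
# anything runs. All names, tokens, and policies below are illustrative.

from dataclasses import dataclass

@dataclass
class Request:
    user: str
    token: str
    action: str
    resource: str

# Hypothetical credential store and policy table, for illustration only.
VALID_TOKENS = {"alice": "token-abc", "bob": "token-xyz"}
POLICY = {
    ("alice", "read", "lab-records"): True,
    ("bob", "read", "lab-records"): True,
}

def authenticate(req: Request) -> bool:
    """Verify the request really comes from the claimed user."""
    return VALID_TOKENS.get(req.user) == req.token

def authorize(req: Request) -> bool:
    """Check the (user, action, resource) triple against explicit policy.
    Anything not explicitly allowed is denied -- the zero-trust default."""
    return POLICY.get((req.user, req.action, req.resource), False)

def handle(req: Request) -> str:
    if not authenticate(req):
        return "denied: authentication failed"
    if not authorize(req):
        return "denied: not permitted by policy"
    return f"allowed: {req.user} may {req.action} {req.resource}"

print(handle(Request("alice", "token-abc", "read", "lab-records")))  # allowed
print(handle(Request("bob", "token-xyz", "write", "lab-records")))   # denied
```

In a production system the token check would be a cryptographic verification against an identity provider and the policy table would live in an externally managed policy engine, but the deny-by-default structure is the same.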
What are some of the biggest challenges you foresee with implementing the strategy?
The emphasis on R&D leading to better authentication (identifying the source of requests and the integrity of information) is a good start on the problem of attribution in cyberattacks.
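As one illustration of what “identifying the source of requests and the integrity of information” can mean in practice, the sketch below uses Python’s standard-library hmac module. The shared key and messages are placeholders, and this is not taken from the strategy itself; it is just a standard message-authentication pattern in which a valid tag tells the receiver that the message came from a key holder and has not been altered.

```python
# Minimal message-authentication sketch: an HMAC tag lets the receiver check
# both the source of a request (only holders of the shared key can produce a
# valid tag) and the integrity of the information (any tampering changes the
# tag). Key and messages are placeholders for illustration.

import hmac
import hashlib

SHARED_KEY = b"example-shared-secret"  # assumes the key was exchanged out of band

def sign(message: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag over the message."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign(message), tag)

original = b"open valve 7"
tag = sign(original)

print(verify(original, tag))         # True: source and content check out
print(verify(b"open valve 9", tag))  # False: the message was altered
```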
The harder issue is the balance of privacy and attribution. This is inherently an authorization or policy problem where the appropriate “good enough” policy is a trade-off among stakeholders. This is where many difficult conversations will occur. Do we want a total surveillance state or the wild west? That’s a false dichotomy. We want something in between, where the trade-offs are made based on mission or situation. Protecting access to a biolab with pathogens that could trigger the next pandemic probably won’t weigh privacy as heavily as a public library giving internet access to people who cannot afford their own computers.
What else can/should be done to prevent attacks and mitigate damage to critical infrastructure?
Engineering exists to support society. Our profession exists in large part to provide critical infrastructure that is safe, secure, and operates with integrity and equity in mind. Our profession excels when we recognize that “good enough” safety, security, integrity, and equity have no universally agreed-upon definitions for all cases, applications, and missions. Defining them involves precisely and accurately identifying unacceptable losses to stakeholders for each mission or purpose. Once that is done, so-called “adult conversations” can happen where “good enough” is defined through trade-offs. Engineers, planners, and folks in leadership know that it’s impossible to maximize all parameters at once; for example, you cannot get the biggest, heaviest car with the largest engine while also maximizing fuel efficiency.
An adult conversation the U.S. government will have to have is about the use of COTS (commercial off-the-shelf) products in mission-critical systems and critical infrastructure. COTS products are built for the commercial market, often for home users (e.g., PCs). They are designed for benign operating environments, not military ones. Using COTS is like a SEAL team going to Best Buy and picking up someone from the Geek Squad to deploy with them on a mission. The question for any critical-infrastructure system is: should we prioritize cost over safety and security?