Russia’s invasion of Ukraine highlights how the European Union must consider national security when regulating tech. Right now, it does not.
Imagine Russian tanks rolling across NATO’s eastern border. European regulations allow the invaders to scoop up data on both military and civilian traffic. Cybersecurity loopholes let them jam defense signals and knock out critical infrastructure. European armies cannot fully exploit artificial intelligence on the battlefield.
Several European tech regulations, recently passed or under consideration, increase these security risks.
Start with cybersecurity. It’s key to preventing traffic disruptions and to protecting airports, harbors, and railroads. To mitigate these risks, the European Commission encourages the coordination of civilian and military traffic control.
Yet the EU’s new cybersecurity regulation promotes data localization and cybersecurity certifications, both of which carry significant security risks. Ukraine protected its critical infrastructure after 2022, from its banks to its government records, by moving key data outside the country. The government worked with the private sector, including US companies from Microsoft to VMware, rather than wasting energy on ‘certifying’ those companies’ cybersecurity defenses.
A second concern stems from data sharing. The EU is considering a Data Act that could oblige companies to share their collected data with competitors unnecessarily. Shared data could then be reverse-engineered to extract information about critical infrastructure, to the detriment of national security.
Such unwarranted sharing could force car companies to release data about their vehicles and mapping companies to give away data about traffic patterns. Cars equipped with sensors and cameras amass impressive amounts of data, and mapping companies such as TomTom or Google have built deep knowledge of infrastructure deficiencies and repairs. A hostile country could gain access to this data and leverage it on the battlefield.
Artificial intelligence represents a third area of concern as it emerges as a defense enabler. China is incorporating AI into its defense, considering it a key tool for building a world-class modern military. Russia is developing unmanned, land-based, AI-powered robots and threatens (albeit in propagandistic terms) to deploy them on the battlefield in Ukraine against US Abrams and German Leopard tanks.
AI enhances logistics, namely the movement of troops, equipment, ammunition, and supplies. It improves intelligence gathering and reconnaissance by sharpening the analysis of enemy movements. It boosts the performance of weapons by identifying enemy military targets, including drones, and by operating unmanned vehicles. And by reducing the risk of human casualties, AI may lower the threshold for a military to take offensive action.
The problem is that the EU prioritizes AI regulation over innovation. The EU’s AI Act, now in final negotiations, could ban some AI practices outright, such as social scoring and subliminal manipulation. Other applications would be classified as ‘high-risk,’ such as software powering critical infrastructure and law enforcement investigations, as well as student assessments and recruitment decisions. Developers of these products would be obliged to comply with strict requirements for transparency, safety, and human oversight.
The required conformity assessments promise to be cumbersome, particularly for small and medium-sized enterprises, and would cause the EU to lose further ground to the US and China. Although the AI Act excludes ‘national security, defense, and military purpose’ from its scope, the dual-use nature of AI innovation undermines this distinction.
The US and China place their cybersecurity and AI industries at the center of national security. The lack of an EU-wide defense strategy prevents Europe from doing the same.
Overall, the EU must strike a better balance between regulation and innovation, prioritizing entrepreneurship over risk aversion. This means shortening the list of ‘high-risk’ AI uses and narrowing their scope; a concrete example would be to stop treating biometric and personal data as ‘high-risk’ by default. The EU’s final internal negotiations should at least settle on the ‘sandbox option,’ a controlled environment where AI can be tested before deciding which uses warrant regulation as high risk.
Transatlantic cooperation is key. A new Hub for EU Defense Innovation represents a good first step. It brings together European national governments and the European Commission and partners with NATO’s Defense Innovation Accelerator for the North Atlantic.
Russia’s invasion of Ukraine highlights the need to prioritize defense-related tech. The EU and the US have an interest in working together to forge tech regulations that protect and promote transatlantic defense.
Dr. Henrik Larsen is a Senior Researcher in the Swiss and Euro-Atlantic Security Team at the Center for Security Studies at ETH Zürich, with a focus on NATO and transatlantic security. He previously served as a Political Adviser with the EU Delegation to Ukraine and was a Research Fellow at Harvard University’s Belfer Center, the Carnegie Endowment for International Peace, and Stanford University’s Center on Democracy, Development, and the Rule of Law.
Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions are those of the author and do not necessarily represent the position or views of the institutions they represent or the Center for European Policy Analysis.