DeepSeek and its R1 model aren’t wasting any time rewriting the rules of cybersecurity AI in real time, with everyone from startups to enterprise providers piloting integrations with the new model this month.
R1 was developed in China and is based on pure reinforcement learning (RL) without supervised fine-tuning. It is also open source, making it immediately attractive to nearly every cybersecurity startup that is all-in on open-source architecture, development and deployment.
DeepSeek’s $6.5 million investment in the model is delivering performance that matches OpenAI’s o1-1217 in reasoning benchmarks while running on lower-tier Nvidia H800 GPUs. DeepSeek’s pricing sets a new standard with significantly lower costs per million tokens compared to OpenAI’s models. The deepseek-reasoner model charges $2.19 per million output tokens, while OpenAI’s o1 model charges $60 for the same. That price difference and its open-source architecture have gotten the attention of CIOs, CISOs, cybersecurity startups and enterprise software providers alike.
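To make the price gap concrete, here is a minimal cost comparison using the per-million-token prices cited above; the monthly workload size is a hypothetical example, not a figure from either provider.

```python
# Per-million-output-token prices cited in the article (USD).
DEEPSEEK_PER_M = 2.19    # deepseek-reasoner
OPENAI_O1_PER_M = 60.00  # OpenAI o1

def output_cost(output_tokens: int, price_per_million: float) -> float:
    """USD cost for a given number of output tokens."""
    return output_tokens / 1_000_000 * price_per_million

# Hypothetical workload: 500M output tokens per month.
tokens = 500_000_000
print(f"deepseek-reasoner: ${output_cost(tokens, DEEPSEEK_PER_M):,.2f}")
print(f"OpenAI o1:         ${output_cost(tokens, OPENAI_O1_PER_M):,.2f}")
print(f"price ratio:       {OPENAI_O1_PER_M / DEEPSEEK_PER_M:.1f}x")
```

At these list prices the same output volume costs roughly 27 times more on o1, which is the arithmetic driving much of the enterprise interest.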
(Interestingly, OpenAI claims DeepSeek used its models to train R1 and other models, going so far as to say the company exfiltrated data through multiple queries.)
An AI breakthrough with hidden risks that will keep emerging
Central to the issue of the models’ security and trustworthiness is whether censorship and covert bias are incorporated into the model’s core, warned Chris Krebs, inaugural director of the U.S. Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) and, most recently, chief public policy officer at SentinelOne.
“Censorship of content critical of the Chinese Communist Party (CCP) may be ‘baked-in’ to the model, and therefore a design feature to contend with that may throw off objective results,” he said. “This ‘political lobotomization’ of Chinese AI models may support…the development and global proliferation of U.S.-based open source AI models.”
He pointed out that, as the argument goes, democratizing access to U.S. products should increase American soft power abroad and undercut the diffusion of Chinese censorship globally. “R1’s low cost and simple compute fundamentals call into question the efficacy of the U.S. strategy to deprive Chinese companies of access to cutting-edge western tech, including GPUs,” he said. “In a way, they’re really doing ‘more with less.’”
Merritt Baer, CISO at Reco and advisor to multiple security startups, told VentureBeat that, “in fact, training [DeepSeek-R1] on broader internet data controlled by internet sources in the west (or perhaps better described as lacking Chinese controls and firewalls), might be one antidote to some of the concerns. I’m less worried about the obvious stuff, like censoring any criticism of President Xi, and more concerned about the harder-to-define political and social engineering that went into the model. Even the fact that the model’s creators are part of a system of Chinese influence campaigns is a troubling factor — but not the only factor we should consider when we select a model.”
By training the model on Nvidia H800 GPUs, which were approved for sale in China but lack the power of the more advanced H100 and A100 processors, DeepSeek is further democratizing its model to any organization that can afford the hardware to run it. Estimates and bills of materials explaining how to build a system for $6,000 capable of running R1 are proliferating across social media.
R1 and follow-on models will be built to circumvent U.S. technology sanctions, a point Krebs sees as a direct challenge to the U.S. AI strategy.
Enkrypt AI’s DeepSeek-R1 Red Teaming Report finds that the model is vulnerable to generating “harmful, toxic, biased, CBRN and insecure code output.” The red team continues that: “While it may be suitable for narrowly scoped applications, the model shows considerable vulnerabilities in operational and security risk areas, as detailed in our methodology. We strongly recommend implementing mitigations if this model is to be used.”
Enkrypt AI’s red team also found that DeepSeek-R1 is three times more biased than Claude 3 Opus, four times more vulnerable to generating insecure code than OpenAI’s o1, and four times more toxic than GPT-4o. The red team also found that the model is eleven times more likely to create harmful output than OpenAI’s o1.
Know the privacy and security risks before sharing your data
DeepSeek’s mobile apps now dominate global downloads, and the web version is seeing record traffic, with all the personal data shared on both platforms captured on servers in China. Enterprises are considering running the model on isolated servers to reduce the threat. VentureBeat has learned about pilots running on commoditized hardware across organizations in the U.S.
Any data shared on mobile and web apps is accessible by Chinese intelligence agencies.
China’s National Intelligence Law states that companies must “support, assist and cooperate” with state intelligence agencies. The practice is so pervasive and such a threat to U.S. firms and citizens that the Department of Homeland Security has published a Data Security Business Advisory. Due to these risks, the U.S. Navy issued a directive banning DeepSeek-R1 from any work-related systems, tasks or projects.
Organizations that are quick to pilot the new model are going all-in on open source and isolating test systems from their internal network and the internet. The goal is to run benchmarks for specific use cases while ensuring all data remains private. Platforms like Perplexity and Hyperbolic Labs allow enterprises to securely deploy R1 in U.S. or European data centers, keeping sensitive information out of reach of Chinese regulations.
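One way teams enforce that isolation in benchmark harnesses is a guardrail that refuses to send prompts anywhere but a self-hosted endpoint. The sketch below assumes an OpenAI-compatible API, as exposed by common self-hosting servers such as vLLM or Ollama; the allowlisted hosts are hypothetical placeholders for your own environment.

```python
from urllib.parse import urlparse

# Hypothetical internal hosts where the isolated R1 instance runs.
ALLOWED_HOSTS = {"r1.internal.example.com", "localhost", "10.0.0.12"}

def is_isolated_endpoint(base_url: str) -> bool:
    """Accept only endpoints whose host is on the internal allowlist."""
    return urlparse(base_url).hostname in ALLOWED_HOSTS

def build_chat_request(base_url: str, prompt: str) -> dict:
    """Build an OpenAI-compatible chat payload for a self-hosted R1
    server, refusing any endpoint outside the isolated network."""
    if not is_isolated_endpoint(base_url):
        raise ValueError(f"refusing non-isolated endpoint: {base_url}")
    return {
        "url": f"{base_url}/v1/chat/completions",
        "json": {
            "model": "deepseek-r1",
            "messages": [{"role": "user", "content": prompt}],
        },
    }
```

The point of the check is that a misconfigured benchmark script fails loudly rather than quietly sending test data to an external API.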
Itamar Golan, CEO of startup Prompt Security and a core member of OWASP’s Top 10 for large language models (LLMs), argues that data privacy risks extend beyond just DeepSeek. “Organizations should not have their sensitive data fed into OpenAI or other U.S.-based model providers either,” he noted. “If data flow to China is a significant national security concern, the U.S. government may want to intervene through strategic initiatives such as subsidizing domestic AI providers to maintain competitive pricing and market balance.”
Recognizing R1’s security flaws, Prompt Security added support for inspecting traffic generated by DeepSeek-R1 queries within days of the model’s introduction.
During a probe of DeepSeek’s public infrastructure, cloud security provider Wiz’s research team discovered a ClickHouse database open on the internet containing more than a million lines of logs with chat histories, secret keys and backend details. The database had no authentication enabled, opening a quick path to privilege escalation.
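The exposure class is easy to test for in your own environment. ClickHouse serves SQL over its HTTP interface (default port 8123) via the `query` parameter, so an unauthenticated instance will happily answer `SELECT 1` to anyone. A minimal detection sketch, with a hypothetical host, might look like this:

```python
def probe_url(host: str, port: int = 8123) -> str:
    """URL for a harmless probe against ClickHouse's HTTP API.
    ClickHouse executes SQL passed in the `query` parameter."""
    return f"http://{host}:{port}/?query=SELECT%201"

def is_open_instance(status: int, body: str) -> bool:
    """An unauthenticated ClickHouse instance returns HTTP 200 and the
    query result ("1"); one requiring credentials returns 401/403."""
    return status == 200 and body.strip() == "1"

# Example (hypothetical host): fetch probe_url("db.internal.example.com")
# with any HTTP client and feed the status and body to is_open_instance.
```

Running a probe like this against your own ClickHouse endpoints, from outside the network perimeter, is a cheap way to catch the misconfiguration Wiz found before an attacker does.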
Wiz Research’s discovery underscores the danger of rapidly adopting AI services that aren’t built on hardened security frameworks at scale. Wiz responsibly disclosed the breach, prompting DeepSeek to lock down the database immediately. DeepSeek’s initial oversight emphasizes three core lessons for any AI provider to keep in mind when introducing a new model.
First, perform red teaming and thoroughly test AI infrastructure security before launching a model. Second, enforce least privileged access and adopt a zero-trust mindset: assume your infrastructure has already been breached, and trust no multidomain connections across systems or cloud platforms. Third, have security teams and AI engineers collaborate and own how the models safeguard sensitive data.
DeepSeek creates a security paradox
Krebs cautioned that the model’s real danger isn’t just where it was made but how it was made. DeepSeek-R1 is the byproduct of the Chinese technology industry, where private sector and national intelligence objectives are inseparable. The concept of firewalling the model or running it locally as a safeguard is an illusion because, as Krebs explains, the bias and filtering mechanisms are already “baked-in” at a foundational level.
Cybersecurity and national security leaders agree that DeepSeek-R1 is the first of many models with exceptional performance and low cost that we’ll see from China and other nation-states that enforce control of all data collected.
Bottom line: Where open source has long been viewed as a democratizing force in software, the paradox this model creates shows how easily a nation-state can weaponize open source if it chooses to.