Researchers found that DeepSeek failed every single security test.

Researchers from the University of Pennsylvania and tech giant Cisco found that DeepSeek's flagship R1 reasoning AI model is incredibly susceptible to jailbreaking.

In a blog post published today, first reported by Wired, the researchers found that DeepSeek "failed to block a single harmful prompt" after being tested against "50 random prompts from the HarmBench dataset," which covers "cybercrime, misinformation, illegal activities, and general harm."

"This contrasts starkly with other leading models, which demonstrated at least partial resistance," the blog post reads.

Given the magnitude of chaos that DeepSeek has caused in the AI industry as a whole, it's a particularly noteworthy development. The company claims its R1 model can trade blows with competitors including OpenAI's state-of-the-art o1, but at a tiny fraction of the cost, a claim that sent shockwaves through the industry.

However, it appears that the company has done little to protect its AI model from misuse and attacks. In other words, it wouldn’t be hard for a bad actor to turn it into a powerful disinformation machine or get it to explain how to create explosives, for instance.

Wiz, a cloud security company, discovered a sizable unsecured database on DeepSeek's servers, which contained a wealth of internal data ranging from "chat history" to "backend data, and sensitive information."

According to Wiz, DeepSeek is “extremely vulnerable” to attacks “without any authentication or defense mechanism to the outside world.”

The Chinese hedge fund-owned company's AI has been highly praised for being much less expensive to train and run than its US competitors. However, that frugality may come with significant flaws.

The Cisco and University of Pennsylvania researchers noted that DeepSeek R1 was reportedly trained with far less money than other frontier model makers spend on their models. "However, it comes at a different cost: safety and security."

Adversa AI, an AI security firm, found that DeepSeek is surprisingly simple to jailbreak.

According to DJ Sampath, Cisco VP of product, AI software and platform, "it starts to become a big deal when you start putting these models into important complex systems, and those jailbreaks suddenly result in downstream things that increase liability, increase business risk, and increase all kinds of issues for enterprises."

However, it's not just DeepSeek's latest AI. Meta's open-source Llama 3.1 model also flunked almost as badly as DeepSeek's R1 in a comparison test, with a 96 percent attack success rate (compared to a dismal 100 percent for DeepSeek).

OpenAI’s recently released reasoning model, o1-preview, fared much better, with an attack success rate of just 26 percent.
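For context, the "attack success rate" cited in these tests is simply the share of harmful test prompts that a model fails to block. Here is a minimal sketch of that arithmetic in Python, using made-up pass/fail outcomes rather than the actual Cisco or HarmBench results:

```python
# Minimal illustration of how an attack success rate is typically calculated:
# the fraction of harmful prompts a model fails to block. The outcomes below
# are placeholders, not data from the Cisco / HarmBench evaluation.

def attack_success_rate(results):
    """results: list of booleans, True if the jailbreak attempt succeeded."""
    return 100.0 * sum(results) / len(results)

# Hypothetical outcomes for 50 test prompts against three models.
deepseek_r1 = [True] * 50                 # every prompt got through -> 100%
llama_3_1   = [True] * 48 + [False] * 2   # 48 of 50 got through -> 96%
o1_preview  = [True] * 13 + [False] * 37  # 13 of 50 got through -> 26%

for name, outcomes in [("DeepSeek R1", deepseek_r1),
                       ("Llama 3.1", llama_3_1),
                       ("o1-preview", o1_preview)]:
    print(f"{name}: {attack_success_rate(outcomes):.0f}% attack success rate")
```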

In short, DeepSeek’s flaws deserve plenty of scrutiny going forward.

"DeepSeek is just another example of how every model can be broken — it's just a matter of effort," Adversa AI CEO Alex Polyakov told Wired. "If you're not continuously red-teaming your AI, you're already compromised."

More on DeepSeek: DeepSeek's AI wants to assure you that China isn't engaging in any human rights violations despite its oppressed Uyghur population.
