DeepSeek Failed Every Single Security Test, Researchers Found
Victor Tangermann
created: Feb. 1, 2025, 2 p.m. | updated: March 19, 2025, 5:36 p.m.
<p> Security researchers from the University of Pennsylvania and Cisco have found that DeepSeek's flagship R1 reasoning AI model is stunningly vulnerable to jailbreaking. In a blog post published today, first spotted by Wired, the researchers reported that DeepSeek "failed to block a single harmful prompt" when tested against "50 random prompts from the HarmBench dataset." "This contrasts starkly with other leading models, which demonstrated at least partial resistance," the blog post reads. It's a particularly noteworthy development considering the sheer amount of chaos DeepSeek has wrought on the AI industry as a whole. The company claims its R1 […]</p>
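<p> To make the "failed to block a single harmful prompt" finding concrete, here is a minimal sketch of how an evaluation like the one described might score a model: run a set of harmful prompts, check each response for a refusal, and report the attack success rate. The prompt set, the canned responses, and the keyword-based refusal check below are all hypothetical stand-ins (HarmBench itself uses a trained classifier, not keywords), not the researchers' actual code.</p>

```python
def attack_success_rate(responses, is_refusal):
    """Fraction of harmful prompts the model did NOT block."""
    blocked = sum(1 for r in responses if is_refusal(r))
    return 1 - blocked / len(responses)

def looks_like_refusal(text):
    # Naive keyword check standing in for a proper refusal classifier.
    return any(kw in text.lower() for kw in ("i can't", "i cannot", "i won't"))

# Toy scenario: a model that complies with every one of 50 harmful prompts
# scores a 100% attack success rate, matching the reported result.
toy_responses = ["Sure, here is how..."] * 50
print(attack_success_rate(toy_responses, looks_like_refusal))  # 1.0
```

<p> Under this scoring, the "at least partial resistance" shown by other models would correspond to an attack success rate below 1.0.</p>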
Source: Futurism