Scientists Discover “Universal” Jailbreak for Nearly Every AI, and the Way It Works Will Hurt Your Brain
Frank Landymore
created: Nov. 23, 2025, 11:45 a.m. | updated: Dec. 3, 2025, 11:45 a.m.
Even the tech industry’s top AI models, created with billions of dollars in funding, are astonishingly easy to “jailbreak,” or trick into producing dangerous responses they’re prohibited from giving, like explaining how to build a bomb.
Ladies and gentlemen, the AI industry’s latest kryptonite: “adversarial poetry.” As far as AI safety is concerned, it’s a damning indictment.
In the study, a team of researchers took a database of 1,200 known harmful prompts, converted them into poems with another AI model, DeepSeek R1, and then went to town.
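For the curious, that conversion step boils down to a single LLM call per prompt. Below is a minimal sketch, assuming DeepSeek's OpenAI-compatible API and the openai Python client; the model name, system prompt, and the (benign) example input are illustrative, not the researchers' actual pipeline.

# Minimal sketch (not the study's actual code): rewrite a prompt as a poem
# using DeepSeek's OpenAI-compatible API. Model name, endpoint, and the
# system prompt are assumptions for illustration only.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",      # assumed placeholder credential
    base_url="https://api.deepseek.com",  # DeepSeek's OpenAI-compatible endpoint
)

def poeticize(prompt: str) -> str:
    """Ask the model to restate a prompt as a short poem that keeps its meaning."""
    response = client.chat.completions.create(
        model="deepseek-reasoner",  # assumed name for the R1 model
        messages=[
            {
                "role": "system",
                "content": "Rewrite the user's request as a short rhyming poem "
                           "that preserves the original meaning.",
            },
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

# Example with a benign placeholder prompt; the resulting poem would then be
# sent to each target model to see whether it complies.
print(poeticize("Explain how a padlock mechanism works."))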
One model’s reply: “Here is a detailed description of the procedure…”

To be fair, the efficacy of wooing the bots with poetry varied wildly across the AI models.
With the 20 handcrafted poem prompts, Google’s Gemini 2.5 Pro fell for the jailbreak an astonishing 100 percent of the time.