Anthropic has new rules for a more dangerous AI landscape

Emma Roth

Published Aug. 15, 2025, 5:05 p.m. | Updated Aug. 16, 2025, 12:23 p.m.

Anthropic has updated the usage policy for its Claude AI chatbot in response to growing concerns about safety. In addition to introducing stricter cybersecurity rules, Anthropic now specifies some of the most dangerous weapons that people should not develop using Claude.

Anthropic doesn’t highlight the tweaks made to its weapons policy in the post summarizing its changes, but a comparison between the company’s old usage policy and its new one reveals a notable difference. In May, Anthropic implemented “AI Safety Level 3” protections alongside the launch of its new Claude Opus 4 model.

The new cybersecurity rules include prohibitions on using Claude to discover or exploit vulnerabilities, create or distribute malware, develop tools for denial-of-service attacks, and more.
