Anthropic hands Claude Code more control, but keeps it on a leash
Rebecca Bellan
created: March 24, 2026, 9 p.m. | updated: March 25, 2026, 3:21 p.m.
Anthropic says its latest update to Claude Code aims to eliminate that choice by letting the AI decide which actions are safe to take on its own, with some limits.
The challenge is balancing speed with control: too many guardrails slow things down, while too few make systems risky and unpredictable.
Safe actions proceed automatically, while risky ones are blocked.
It’s essentially an extension of Claude Code’s existing “--dangerously-skip-permissions” flag, which hands all decision-making to the AI, but with a safety layer added on top.
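For context, Claude Code already lets users draw this safe/risky line manually through permission rules in its settings file. The sketch below is illustrative, not taken from the article; the specific tool patterns shown (the `npm` and `curl` rules) are hypothetical examples of how such allow/deny lists are typically written.

```json
{
  "permissions": {
    "allow": [
      "Bash(npm run test:*)",
      "Read(src/**)"
    ],
    "deny": [
      "Bash(curl:*)"
    ]
  }
}
```

Under the update described here, the model would effectively make these allow/deny calls itself for actions not covered by explicit rules, rather than pausing to ask the user each time.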
Source: TechCrunch