• Blog
  • Anthropic Gives Claude More Autonomy

Anthropic Gives Claude More Autonomy

Updated:March 25, 2026

Reading Time: 2 minutes
A robot on a leash
  • Blog
  • Anthropic Gives Claude More Autonomy

Anthropic Gives Claude More Autonomy

A robot on a leash

Updated:March 25, 2026

Anthropic has released an update to its AI coding assistant, Claude. It’s an “auto mode,” a feature designed to give the model more independence without compromising safety. 

When developers use AI tools, they often have to make a tough decision. They can either approve every action to ensure safety or allow the AI to act freely. 

Both paths come with limitations. Vetting every AI decision slows down progress, while allowing AI to act freely speeds up work but increases risk.

This situation is often referred to as “vibe coding.” It’s a balance between trust and caution. As a result, many developers feel forced to constantly monitor the AI.

Also read: Anthropic CEO Claims AI Hallucinates Less Than Humans

Auto Mode

Image Credit: Jagmeet Singh

Auto mode allows Claude to decide which actions are safe to execute.

The system first reviews each action before execution, checks for risks or unexpected behavior, detects prompt injection attempts, and allows safe actions to proceed automatically. 

It blocks or flags unsafe actions, however. As a result, developers no longer need to approve every step. Instead, the AI handles routine decisions independently.

Anthropic might be on about efficiency, but safety remains a major priority. 

Therefore, Auto mode includes safeguards that identify prompt injection attacks, block actions not requested by the user, and prevent harmful or unintended behavior. 

Prompt injection is a known risk. It involves hidden instructions that manipulate AI systems and unintended actions.

Auto mode actively monitors for such threats. However, Anthropic has not disclosed the exact criteria used to classify actions as safe or risky. 

Autonomy

Auto mode builds on an earlier feature called “dangerously-skip-permissions.” That feature allowed full AI control. However, it carried a higher risk.

This new update takes a more balanced approach. It increases autonomy and acts as a protective layer.

Therefore, the system is not fully autonomous. Instead, it operates within defined boundaries.

Anthropic’s update aligns with broader industry developments. Companies such as GitHub and OpenAI are also advancing AI coding tools to execute tasks, not just suggest them.

However, many tools still rely on user approval for most actions. In contrast, Anthropic allows the AI to make those decisions.

Auto mode follows several recent releases from Anthropic. These include Claude Code Review, which detects bugs automatically, and Dispatch for Cowork, which assigns tasks to AI agents

Availability

Auto mode is currently in research preview. Therefore, it is not yet a final product. It’s available for Enterprise and API users and compatible with Claude Sonnet 4.6 and Opus 4.6. 

Anthropic, however, advises developers to test the feature in sandboxed settings. This reduces potential risks.