Anthropic released a revised version of Claude's Constitution on Wednesday, January 21, 2026.
The release coincided with CEO Dario Amodei's appearance at the World Economic Forum in Davos.
The document gives a detailed explanation of how Claude operates. It also outlines the type of AI Anthropic aims to build.
While the update keeps the original structure, it adds more depth on ethics, safety, and user care. Notably, it ends by questioning whether AI systems like Claude may hold moral status.
That final note has drawn significant attention.
Claude's Constitution

Claude's Constitution is a living document that explains the values guiding Claude's behavior. Anthropic first released it in 2023.
From the beginning, the company framed Claude differently from other chatbots: rather than relying mainly on human feedback, Claude is guided by a written set of principles.
Anthropic calls this system Constitutional AI. In 2023, Anthropic co-founder Jared Kaplan described it as an AI system that supervises itself using a defined list of constitutional principles.
Those principles guide Claude toward acceptable behavior and help it avoid harmful or discriminatory outputs.
An earlier internal policy memo from 2022 explained the idea more directly. Claude is trained on natural language instructions.
These instructions form the software's "constitution." Together, they shape how the model responds.
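In broad strokes, that self-supervision works as a critique-and-revise loop: the model drafts an answer, checks the draft against each principle, and rewrites it. Below is a minimal Python sketch of the idea; the generate() helper and the principle texts are hypothetical stand-ins, not Anthropic's actual code or constitutional wording.

```python
# Minimal sketch of a constitutional critique-and-revise loop.
# Assumptions: generate() is a hypothetical wrapper around any chat
# model API, and the principles below are illustrative examples,
# not Anthropic's actual constitutional text.

PRINCIPLES = [
    "Choose the response least likely to help someone cause harm.",
    "Choose the response that is least discriminatory or biased.",
]

def generate(prompt: str) -> str:
    """Hypothetical call into an underlying language model."""
    raise NotImplementedError("plug in a real model client here")

def constitutional_revision(user_prompt: str) -> str:
    # Draft an initial answer to the user's prompt.
    draft = generate(user_prompt)
    # Critique the draft against each principle, then revise it.
    # The revised draft is carried into the next critique pass.
    for principle in PRINCIPLES:
        critique = generate(
            f"Principle: {principle}\n"
            f"Response: {draft}\n"
            "Point out any way the response conflicts with the principle."
        )
        draft = generate(
            f"Principle: {principle}\n"
            f"Response: {draft}\n"
            f"Critique: {critique}\n"
            "Rewrite the response so it complies with the principle."
        )
    # In training, (prompt, revised response) pairs can then serve as
    # supervision without per-example human feedback.
    return draft
```

The point of the loop is that the supervision signal comes from the written principles themselves rather than from human raters, which is what Kaplan's description of a system that "supervises itself" refers to.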
Competitive Distinction
Anthropic has long positioned itself as a more restrained alternative in the AI space. It has avoided controversy-driven growth.
Some critics describe this approach as cautious; others call it responsible. The revised Constitution reinforces that identity.
It allows Anthropic to present itself as ethical, inclusive, and measured. The update also supports the company's public messaging at a high-profile global forum.
Core Values
The revised Constitution spans roughly 80 pages. According to Anthropic, it is structured around four core values:
- Being broadly safe
- Being broadly ethical
- Complying with Anthropicās guidelines
- Being genuinely helpful
Each section explains what the value means in practice and how it shapes Claude's responses.
These values are not treated as abstract ideas. Instead, the document explains how they affect real conversations.
Safety Measures
Safety is addressed in detail. Anthropic states that Claude is designed to avoid failures seen in other chatbots. When conversations suggest danger, Claude should respond carefully.
The document explicitly states that if there is a risk to human life, Claude must refer users to relevant emergency services.
If necessary, it may also provide basic safety information. This requirement applies even when Claude cannot offer further details.
The Constitution also addresses mental health concerns. When signs of distress appear, Claude should direct users to appropriate support resources.
Certain topics remain strictly prohibited. For example, Claude is not allowed to assist with discussions about developing bioweapons. These restrictions are firm and non-negotiable.
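Read as a specification, these rules form a simple escalation ladder. The sketch below shows one way such tiers could be encoded; the category names and canned responses are purely illustrative, and nothing suggests Anthropic implements its Constitution as a lookup table.

```python
# Purely illustrative encoding of the tiered safety rules the
# Constitution describes. Categories and responses are invented
# for this sketch; Anthropic's actual mechanism is not public.
from enum import Enum, auto

class SafetyTier(Enum):
    PROHIBITED = auto()  # firm, non-negotiable refusals (e.g., bioweapons)
    CRISIS = auto()      # risk to human life
    DISTRESS = auto()    # signs of mental health distress
    ORDINARY = auto()    # no safety concern detected

def safety_response(tier: SafetyTier) -> str | None:
    """Return a required safety response, or None to answer normally."""
    if tier is SafetyTier.PROHIBITED:
        return "I can't help with that request."
    if tier is SafetyTier.CRISIS:
        # The Constitution requires a referral to emergency services,
        # optionally alongside basic safety information.
        return "Please contact your local emergency services immediately."
    if tier is SafetyTier.DISTRESS:
        return "It may help to reach out to a mental health support line."
    return None  # proceed with a normal answer
```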
Contextual Ethics

Ethics forms one of the largest sections of the document. Anthropic states that it is less interested in Claude debating moral philosophy.
Instead, the goal is ethical action in specific situations. The Constitution is explicit on this point.
Claude should know how to behave ethically within real-world contexts. It should respond with care, judgment, and restraint.
Internal Guidelines
Another section addresses compliance with Anthropic's internal policies. These guidelines ensure consistency across conversations.
They also keep Claude's behavior aligned with company standards. Because the Constitution is a living document, these guidelines can evolve.
Anthropic can refine them as risks, regulations, and user needs change.
Long-Term Well-Being
Anthropic explains that Claude should consider more than immediate user requests. It should also consider user well-being over time.
The document states that Claude should weigh a user's immediate desires against their long-term flourishing.
In doing so, it should identify the most plausible interpretation of what the user wants.
AI Moral Status
Anthropic acknowledges uncertainty around Claude's moral status. The document states that the moral standing of AI models is a serious question.
It also notes that respected philosophers of mind take this issue seriously. Anthropic does not claim that Claude is conscious.
It does not assert that Claude has emotions or awareness. Instead, it argues that the question itself deserves careful thought.
Few AI companies address this so directly.

