

Claude Model Family

Anthropic's Claude series from Claude 1 through Opus 4.7, Sonnet 4.6, Haiku 4.5, and Mythos Preview: Constitutional AI, extended thinking, computer use, long context, tiering, and safety governance.

Core · Tier 2 · Frontier · Reference · ~40 min

Why This Matters

Claude is one of the major closed model families alongside GPT and Gemini. Its main technical distinction is the alignment story: Constitutional AI uses written principles and model-generated critiques to supplement human preference data, making part of the alignment target more explicit and auditable. The current Claude product line also makes a clear workload split: Opus for harder work, Sonnet for the speed-intelligence balance, Haiku for high-volume use, and a separate gated preview line for the most sensitive cybersecurity capabilities.

Anthropic model lineage

Claude tiers, current API names, and gated research previews

Snapshot current to April 22, 2026. Claude tiers are product and deployment surfaces, not published architecture classes.

Opus

Claude Opus 4.7

claude-opus-4-7

Highest generally available tier for hard reasoning, coding, vision, and professional work.

1M context, 128K max output

Sonnet

Claude Sonnet 4.6

claude-sonnet-4-6

Default workhorse tier: strong intelligence, faster latency, and lower cost than Opus.

1M context, 64K max output

Haiku

Claude Haiku 4.5

claude-haiku-4-5

Fastest tier for high-volume, latency-sensitive, or parallel sub-agent workloads.

200K context, 64K max output

Research preview

Claude Mythos Preview

Project Glasswing access

Gated cybersecurity-focused research preview; not generally available.

Restricted access, separate pricing

Release path
Mar 2023

Claude 1

First public Claude generation; RLHF plus early Constitutional AI.

Jul 2023

Claude 2

Long-context assistant line; 100K context became the public marker.

Mar 2024

Claude 3

Haiku, Sonnet, Opus tiering with 200K context and vision input.

Feb 2025

Claude 3.7 Sonnet

Extended thinking exposed reasoning budget as a developer control.

May 2025

Claude 4

New Sonnet and Opus generation for coding, tools, and reasoning.

2025-26

Claude 4.5 / 4.6

Per-tier upgrades; Sonnet 4.6 becomes the main speed/intelligence balance.

Apr 2026

Opus 4.7

Generally available Opus upgrade with stronger coding, vision, and safeguards.

Thinking controls

Sonnet 4.6 and Haiku 4.5 support extended thinking; Opus 4.7 uses adaptive thinking.

Computer use

Anthropic introduced computer use in 2024; docs still treat computer-use workflows as safety-sensitive.

Architecture disclosure

Anthropic does not publish parameter counts or dense-vs-MoE details for Claude.

Safety artifacts

System cards, Transparency Hub entries, and the Responsible Scaling Policy are part of the public record.

Current Snapshot: April 22, 2026

Anthropic's current generally available Claude API lineup is Claude Opus 4.7, Claude Sonnet 4.6, and Claude Haiku 4.5. The API docs list claude-opus-4-7, claude-sonnet-4-6, and claude-haiku-4-5-20251001 as the current model IDs (the Haiku ID is a dated snapshot). Opus 4.7 and Sonnet 4.6 are listed with 1M-token context windows; Haiku 4.5 is listed with a 200K-token context window. These are deployment facts, not architecture disclosures.
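The deployment facts above can be kept as a small lookup table for routing and validation. This is a sketch: the IDs, context windows, and output limits are copied from this snapshot, and the helper function is illustrative, not part of any SDK.

```python
# Context and max-output figures from the April 2026 snapshot on this page.
# These are deployment limits, not architecture facts, and they change over time.
MODELS = {
    "claude-opus-4-7":   {"context": 1_000_000, "max_output": 128_000},
    "claude-sonnet-4-6": {"context": 1_000_000, "max_output": 64_000},
    "claude-haiku-4-5":  {"context": 200_000,   "max_output": 64_000},
}

def fits(model_id: str, prompt_tokens: int) -> bool:
    """Check whether an estimated prompt size fits a model's context window."""
    return prompt_tokens <= MODELS[model_id]["context"]
```

A guard like `fits("claude-haiku-4-5", n)` before dispatching high-volume jobs avoids hard failures when a batch item unexpectedly exceeds the 200K window.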

The separate Claude Mythos Preview line should be treated carefully. Anthropic describes Mythos Preview as a gated research-preview model for Project Glasswing, focused on defensive cybersecurity work with selected partners. It is not the default Claude product, and Anthropic says it does not plan to make Mythos Preview generally available in its current form.

Three caveats keep the page honest:

  1. Tier names are product tiers. Haiku, Sonnet, and Opus describe cost, latency, and capability positioning; they do not reveal parameter counts or architecture.
  2. Long context is not automatic understanding. A 1M-token window means the model can accept that much input under supported settings; retrieval quality still depends on prompt structure, tools, and task design.
  3. Benchmarks are not universal rankings. Claude, GPT, and Gemini trade positions depending on the task, harness, tools, and reasoning budget.

Model Timeline

Claude 1 (March 2023)

Anthropic's first public model. Trained with RLHF and an early version of Constitutional AI. Competitive with GPT-3.5 on general tasks. Context window: 9K tokens, later extended to 100K. The 100K context variant was notable at the time because most competitors offered 4K-8K.

Claude 2 (July 2023)

Improved reasoning, coding, and math performance over Claude 1. Context window: 100K tokens. Better calibration on refusals (fewer false positives on harmless requests). Claude 2.1 (November 2023) reduced hallucination rates and improved accuracy on long documents.

Claude 3 Family (March 2024)

Three tiers targeting different use cases:

  • Haiku: smallest, fastest, cheapest. Designed for high-volume, latency-sensitive tasks. Competitive with GPT-3.5 Turbo at lower cost.
  • Sonnet: mid-tier. Balanced speed and capability. The default choice for most applications.
  • Opus: largest and most capable. Strongest reasoning and analysis. Competitive with GPT-4 Turbo on most benchmarks.

All three models supported 200K token context windows, vision (image input), and tool use.

Claude 3.5 Sonnet (June 2024)

A significant jump: Claude 3.5 Sonnet matched or exceeded Claude 3 Opus on most benchmarks while being faster and cheaper (Sonnet-tier pricing). Strong performance on coding tasks (SWE-bench), reasoning, and instruction following. The October 2024 refresh introduced computer use in beta: the model could interact with desktop applications through screenshots and mouse/keyboard control.

Claude 3.7 Sonnet (February 2025)

Introduced extended thinking: a mode in which the model produces a visible scratchpad of reasoning tokens before answering. The mechanism is the same one studied on the chain-of-thought page; what changed is that Anthropic exposed a developer-controllable thinking budget via the API. Extended thinking improves performance on math, coding, and long-horizon planning at the cost of higher token usage and latency. Claude 3.7 also improved coding-agent workflows, with stronger performance on SWE-bench Verified than prior Claude releases.

Claude 4 Family (May 2025)

Claude 4 Sonnet and Claude 4 Opus. Continued improvements in reasoning, coding, and tool-using task completion. Extended tool use capabilities. Sonnet 4 became the default for most applications, with Opus 4 reserved for longer-horizon, harder reasoning tasks. Architecture details remain undisclosed.

Claude 4.5 Family (late 2025)

Claude 4.5 Sonnet and Claude 4.5 Opus. The 4.5 series focused on longer tool-use chains without drift, better instruction-following inside subagents, and improvements on computer-use benchmarks. Claude Haiku 4.5 (October 2025) brought much stronger coding and computer-use performance to the low-cost tier, making parallel sub-agent patterns more practical.

Claude 4.6 Family (February 2026)

Claude Opus 4.6 and Claude Sonnet 4.6. The public positioning emphasized coding, computer use, professional work, and a 1M-token context window in supported API settings. Sonnet 4.6 became the main speed-intelligence workhorse and is listed in the API docs with both extended thinking and adaptive thinking support.

Claude 4.7 Opus (April 2026)

Released April 16, 2026. Claude Opus 4.7 is positioned as the strongest generally available Opus model for coding, vision, complex professional work, and long-context tasks, with a 1M-token context window in supported API settings. The 4.7 release did not ship Sonnet or Haiku siblings simultaneously; Anthropic now ships separate minor versions per tier when the capability gains are meaningful.

Claude Mythos Preview (April 2026)

Claude Mythos Preview is not a normal Claude product tier. Anthropic introduced it through Project Glasswing as a gated research preview for defensive cybersecurity work with selected partners. It is relevant to the model-family story because Anthropic says it is more capable than Opus 4.7 on some coding and cybersecurity tasks, but its access model, safety posture, and use cases are different from generally available Claude.

Version cadence

As of April 2026 the release pattern is:

  • Major family (Claude 4) introduced a new public generation of Claude capabilities.
  • Minor versions (4.1, 4.5, 4.6, 4.7) are released per tier rather than as one synchronized family drop.
  • Haiku is optimized for cost and latency; Sonnet is the main workhorse tier; Opus is the maximum-capability tier.
  • Research-preview models such as Mythos Preview may sit outside the normal Haiku/Sonnet/Opus ladder.

This cadence is not stated by Anthropic as a policy; it is the observed pattern across 2024-2026 releases. Treat it as empirical, not guaranteed.

Constitutional AI

Definition

Constitutional AI (CAI)

An alignment method where a set of explicit written principles (the "constitution") guides model behavior. Instead of training a reward model solely on human preference labels (as in standard RLHF), CAI uses the model itself to critique and revise its outputs according to the constitution, then trains a preference model on these AI-generated comparisons. This is sometimes called RLAIF (Reinforcement Learning from AI Feedback).

Proposition

CAI Training Signal Construction

Statement

Constitutional AI constructs training signal in two phases. Phase 1 (supervised): generate response, critique it against each constitutional principle, revise, and fine-tune on the revision. Phase 2 (RL): generate response pairs, use the model to choose which better follows the constitution, train a preference model on these AI-labeled pairs, then optimize the policy with RL (PPO) against this preference model. The resulting model satisfies the constitutional constraints more consistently than RLHF with equivalent human annotation budget.
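The two phases can be sketched as data-construction loops. Everything below is a stand-in: the `generate`, `critique`, `revise`, and `prefer` functions are toy stubs standing in for model calls, and the preference rule is deliberately trivial; the real pipeline in Bai et al. (2022) trains a preference model on the AI-labeled pairs and then runs RL against it.

```python
# Toy sketch of the two CAI phases. Every function body is an illustrative
# stand-in for a model call, not Anthropic's implementation.

def generate(prompt):
    # Stand-in for sampling a response from the base model.
    return f"draft answer to: {prompt}"

def critique(response, principle):
    # Stand-in for the model critiquing its own output against one principle.
    return f"critique of '{response}' under '{principle}'"

def revise(response, critique_text):
    # Stand-in for the model rewriting its output to address the critique.
    return response + " [revised]"

def phase1_sl_data(prompts, constitution):
    """Phase 1 (supervised): generate -> critique -> revise per principle,
    then keep (prompt, final revision) pairs for fine-tuning."""
    data = []
    for p in prompts:
        r = generate(p)
        for principle in constitution:
            r = revise(r, critique(r, principle))
        data.append((p, r))
    return data

def prefer(pair, constitution):
    """Phase 2 (RL): stand-in AI judge choosing which response better follows
    the constitution; real CAI trains a preference model on these labels
    and then optimizes the policy with PPO against it."""
    a, b = pair
    return a if len(a) >= len(b) else b  # toy tie-break, not a real criterion
```

The point of the sketch is the data flow: human effort goes into writing `constitution`, while the labels themselves come from model calls.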

Intuition

Instead of asking thousands of human annotators "which response is better?", you write down your criteria explicitly and have the model itself apply those criteria. This is cheaper, more scalable, and makes the alignment target auditable: anyone can read the constitution and know what the model was trained to optimize.

Proof Sketch

The empirical result (Bai et al., 2022) shows that RLAIF-trained models match RLHF-trained models on helpfulness while improving on harmlessness metrics. The constitutional critique step provides a training signal that correlates with human judgments on harmlessness at roughly the same level as inter-annotator agreement.

Why It Matters

Standard RLHF encodes alignment criteria implicitly in annotator preferences. This makes it hard to audit, modify, or debug. If the model behaves unexpectedly, you cannot point to a specific principle that was violated. CAI makes the criteria explicit. If you want the model to behave differently, you change the constitution.

Failure Mode

CAI assumes the model is capable enough to perform meaningful self-critique. For weaker models, the self-critique is low quality and the training signal is noisy. The constitution must also be well-written: vague or contradictory principles produce inconsistent behavior. CAI does not eliminate the need for human judgment; it shifts it from labeling individual examples to writing good principles.

Architecture

Anthropic has not published detailed architecture specifications for Claude models. Based on available information:

  • Architecture class. Undisclosed. Do not infer dense-vs-MoE from public marketing pages; Anthropic does not publish enough detail to make that claim responsibly.
  • Long context. Claude 3 exposed 200K-token context. Claude 4.x pages and API notes advertise 1M-token context for some models and settings, often with beta or platform-specific availability.
  • Parameter counts. Not disclosed for any Claude model.
  • Training data. Anthropic's Transparency Hub gives broad cutoff dates and data-source categories for some models, but not a reproducible training corpus.

This lack of architectural disclosure is a deliberate choice. Anthropic has argued that detailed capability disclosures can accelerate proliferation of dangerous capabilities. This contrasts with Meta (full architecture and weights for Llama) and DeepSeek (detailed technical reports).

Key Technical Capabilities

Tool use. Claude can call external tools (functions, APIs) defined by the developer. The model decides when to call a tool, constructs the arguments, and incorporates the result into its response. The general framing for this kind of system is on agentic RL and tool use; the agent protocols (MCP and A2A) page covers the wire-format conventions Claude and other models speak.
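A tool definition in this style is a JSON-schema description the developer passes alongside the request. The sketch below assumes the JSON-schema tool format used by the Messages API; the `get_weather` tool itself is hypothetical.

```python
# Hypothetical developer-defined tool in JSON-schema form; check the current
# Anthropic docs for the exact accepted fields before relying on this shape.
get_weather = {
    "name": "get_weather",
    "description": "Return current weather for a city.",
    "input_schema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

# Passed as tools=[get_weather] on a request. The model decides when to emit
# a tool call with arguments matching input_schema; the developer executes
# the tool and returns its result in a follow-up message.
```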

Computer use. Starting with Claude 3.5 Sonnet (October 2024 beta), the model can interact with desktop environments via screenshots and simulated mouse/keyboard input. By Sonnet 4.6 and Opus 4.7, computer-use benchmarks and product examples are central to Anthropic's release story, but computer-use applications remain safety-sensitive because web pages and software environments can inject instructions; the failure modes are documented on red-teaming AI.

Extended thinking. From Claude 3.7 Sonnet onward, the model can be asked to spend extra reasoning tokens before answering. Developers set a thinking budget that caps this work. Long thinking trades latency and cost for better accuracy on math, planning, and code. Current Anthropic docs distinguish extended thinking from adaptive thinking for specific models. The capability tracks the test-time compute and search line.
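A thinking budget enters the request as an explicit cap. The sketch below builds a request body offline (no network call); the model ID and the budget idea come from this page, while the exact `thinking` parameter shape is an assumption that should be checked against the current API docs.

```python
# Sketch of a Messages-style request with an extended-thinking budget.
# The "thinking" field's exact shape is assumed; verify against current docs.
request = {
    "model": "claude-sonnet-4-6",
    "max_tokens": 16_000,
    "thinking": {
        "type": "enabled",
        "budget_tokens": 8_000,  # cap on reasoning tokens spent before answering
    },
    "messages": [{"role": "user", "content": "Plan the refactor step by step."}],
}

# The budget must leave room for the visible answer within max_tokens.
assert request["thinking"]["budget_tokens"] < request["max_tokens"]
```

Raising `budget_tokens` trades latency and cost for accuracy on math, planning, and code, which is the control knob the 3.7 release exposed.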

Vision. Claude 3 and later models accept image inputs alongside text. They can analyze charts, read text in images, and reason about visual content.

Comparison with GPT and Gemini

  • Reasoning. Claude, GPT, and Gemini each lead on different public and internal evaluations depending on tools, harness, budget, and domain. Claude's 2025-2026 releases emphasize coding, computer use, long-horizon work, and enterprise document tasks.
  • Long context. Claude 3 made 200K context normal for the family. Claude Opus 4.7 and Sonnet 4.6 are listed with 1M-token context in the current API docs. Effective use at that length varies by retrieval, instruction placement, tool use, and task structure.
  • Safety. Anthropic publishes system cards, a Transparency Hub, and a Responsible Scaling Policy. Those artifacts improve auditability, but they do not prove Claude is safer on every real-world deployment.
  • Multimodality. Claude supports text and image input. Gemini supports text, image, audio, and video natively. GPT-4o supports text, image, and audio.
  • Open weights. Claude has no open-weight releases. GPT has no open-weight releases for frontier models. Meta and DeepSeek provide open weights.

Pricing Tiers

The three-tier model (Haiku/Sonnet/Opus) reflects a deliberate design for different workloads:

  • Haiku: high throughput, low latency, low cost. For classification, extraction, and simple generation.
  • Sonnet: the general-purpose default. Suitable for most applications.
  • Opus: maximum capability. For complex reasoning, analysis, and tasks where accuracy matters more than speed or cost.

This tiering pattern is common across providers: OpenAI (GPT-4o-mini / GPT-4o / o1), Google (Flash / Pro / Ultra).
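The tiering above implies a routing decision at call time. The rule below is a toy reflection of the positioning described on this page; the thresholds and flags are illustrative, not Anthropic guidance.

```python
# Toy tier router following the Haiku/Sonnet/Opus positioning on this page.
# The decision flags are illustrative; real routing would also weigh accuracy
# requirements, token volume, and per-token price.
def pick_tier(needs_deep_reasoning: bool, latency_sensitive: bool) -> str:
    if needs_deep_reasoning:
        return "claude-opus-4-7"      # maximum capability, highest cost
    if latency_sensitive:
        return "claude-haiku-4-5"     # high throughput, lowest cost
    return "claude-sonnet-4-6"        # general-purpose default
```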

Common Confusions

Watch Out

Constitutional AI does not mean the model follows rules perfectly

CAI trains the model to prefer outputs consistent with the constitution, but it does not guarantee compliance. The model can still produce outputs that violate constitutional principles, especially on edge cases or adversarial inputs. The constitution provides a training signal, not a hard constraint.

Watch Out

Do not assume the hidden architecture

Claude's parameter counts, dense-vs-MoE choice, long-context mechanism, and exact post-training mixture are not public. You can compare exposed behavior, pricing, context limits, model cards, and system cards; you should not publish architectural claims unless Anthropic documents them.

Exercises

ExerciseCore

Problem

Explain the difference between RLHF and RLAIF (Constitutional AI). What is the source of the preference signal in each case, and what practical advantage does RLAIF offer?

ExerciseAdvanced

Problem

Anthropic offers Haiku, Sonnet, and Opus at different price points. Suppose Haiku costs c per million tokens, Sonnet costs 4c, and Opus costs 15c. You have a task where Haiku achieves 80% accuracy, Sonnet achieves 92% accuracy, and Opus achieves 96% accuracy. If you can run Haiku 3 times and take a majority vote, what accuracy do you expect, and how does the cost compare to a single Sonnet call?
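The majority-vote part of this exercise can be checked numerically under an independence assumption (in practice the three runs share the same model and prompt, so errors are correlated and the real gain is smaller):

```python
# Majority of 3 independent runs, each correct with probability p = 0.8.
# Correct if all 3 agree-and-are-right, or exactly 2 of 3 are right.
p = 0.8
maj = p**3 + 3 * p**2 * (1 - p)

cost_haiku_x3 = 3   # three Haiku calls, in units of Haiku's price c
cost_sonnet = 4     # one Sonnet call at 4c
```

Under independence this gives roughly 0.896 accuracy for 3c, versus 0.92 for a single Sonnet call at 4c, so the comparison hinges on how correlated the Haiku errors really are.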

References

Canonical:

  • Bai et al., "Constitutional AI: Harmlessness from AI Feedback" (Anthropic, 2022)
  • Bai et al., "Training a Helpful and Harmless Assistant with RLHF" (Anthropic, 2022)



Last reviewed: April 29, 2026
