Wikipedia Deep Dive

Claude (language model)

Based on Wikipedia: Claude (language model)

In February 2026, Anthropic made a decision that sent ripples through the federal contracting world: they refused to remove contractual prohibitions on mass domestic surveillance and fully autonomous weapons. The result was swift and dramatic—the U.S. government began phasing out its use of Claude, the AI assistant the company had spent years building. It was a moment of quiet defiance that revealed just how deeply the values embedded in these systems had become part of their identity.

That decision traces back to what made Claude distinctive from the beginning. When Anthropic released the first version in 2023, the company gave it a name loaded with meaning: Claude, a tribute to Claude Shannon, the mathematician who laid the foundations of information theory in the 1940s and 1950s. But the name also served another purpose: a friendly, male-gendered counterpart to assistants like Alexa and Siri, a deliberate attempt to create an AI that felt approachable rather than mechanical.

The philosophy behind Claude runs deeper than most users realize. Anthropic developed a training technique they call Constitutional AI, which represents a fundamental shift in how these systems learn ethical behavior. Rather than relying on extensive human feedback—which is expensive, time-consuming, and often inconsistent—the company built an approach that trains AI to be both harmless and helpful through self-governance. The "constitution" isn't some hidden internal process: it's a set of principles written in human-understandable language, roughly 23,000 words by 2026, that the model uses to adjust its responses to comply with clearly stated ethical guidelines.

The first constitution appeared in 2022. By 2023, it listed 75 specific guidelines for how Claude should behave. The most recent update, released in 2026, drew heavily from the 1948 UN Universal Declaration of Human Rights—a deliberate attempt to root the AI's ethics in internationally recognized human rights principles. The philosopher Amanda Askell served as lead author, working with a team that included Joe Carlsmith, Chris Olah, Jared Kaplan, and Holden Karnofsky. All of this was released under Creative Commons CC0, meaning anyone could use, modify, or redistribute it.

The method itself, detailed in the 2022 paper "Constitutional AI: Harmlessness from AI Feedback," works through two distinct phases. First, during supervised learning, Claude generates responses to prompts, then self-critiques those responses against its guiding principles (its "constitution") and revises them accordingly. Those revised responses become training data. Then comes reinforcement learning from AI feedback (RLAIF), in which an AI model compares new responses for compliance with the constitution, producing a dataset that trains a preference model. This is the crux of what makes the approach distinctive: it resembles reinforcement learning from human feedback (RLHF), but uses AI-generated comparisons instead of human ones to train how the system evaluates quality.
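The two phases described above can be sketched in a few lines of Python. This is an illustrative outline only: the function names (`generate`, `critique`, `revise`, `compare`) are hypothetical stand-ins for model calls, not Anthropic's actual API or training code.

```python
# Illustrative sketch of the two Constitutional AI phases.
# All model methods here are hypothetical stand-ins for LLM calls.

def supervised_phase(model, prompts, constitution):
    """Phase 1: self-critique and revision produce supervised training data."""
    training_data = []
    for prompt in prompts:
        response = model.generate(prompt)
        critique = model.critique(response, constitution)  # flag violations of the principles
        revised = model.revise(response, critique)         # rewrite the response to comply
        training_data.append((prompt, revised))            # revised responses become training data
    return training_data

def rlaif_phase(model, prompts, constitution):
    """Phase 2: AI feedback builds comparison data for a preference model."""
    preference_pairs = []
    for prompt in prompts:
        a = model.generate(prompt)
        b = model.generate(prompt)
        winner = model.compare(a, b, constitution)  # AI judges which response complies better
        preference_pairs.append((prompt, a, b, winner))
    return preference_pairs
```

The key difference from RLHF is confined to `rlaif_phase`: the comparison judgment comes from a model consulting the constitution rather than from a human rater.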

What does this mean in practice? By 2025 and 2026, Claude had evolved far beyond simple text generation. The web search feature, added for paying U.S. users in March 2025 and extended to free users in May 2025, lets Claude search the internet via the ClaudeBot crawler. ClaudeBot respects a website's robots.txt file, though Anthropic drew criticism from iFixit in 2024 when the crawler placed excessive load on its site before iFixit added its own robots.txt protection.
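The opt-out mechanism iFixit reached for is the standard one: a site lists rules per user agent in its robots.txt, and a well-behaved crawler checks them before fetching. The sketch below uses Python's standard `urllib.robotparser` against a hypothetical rule set that blocks a crawler identifying itself as ClaudeBot while leaving other bots unaffected; the URL and rules are illustrative, not iFixit's actual configuration.

```python
# Check robots.txt rules the way a compliant crawler would,
# using the standard-library parser against example rules.
from urllib.robotparser import RobotFileParser

rules = """
User-agent: ClaudeBot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# ClaudeBot is blocked from the whole site; other crawlers are not.
print(parser.can_fetch("ClaudeBot", "https://example.com/guide/123"))  # False
print(parser.can_fetch("OtherBot", "https://example.com/guide/123"))   # True
```

A crawler that honors these rules simply skips any URL for which `can_fetch` returns `False` for its own user-agent string.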

The subscription tiers reveal a company scaling rapidly. Anthropic began offering consumer subscriptions in September 2023 with Claude Pro at $20 per month (later also available at $200 annually), giving users five times the usage of the free tier and early access to new features. By March 2026, the plan offered extended access to Projects and exclusive features such as Claude Code and Claude in Chrome. Enterprise options arrived in May 2024: Claude Team started at $30 per user per month with a five-user minimum and significantly higher chat limits than the Pro or free tiers. Anthropic later updated pricing to start at $20 per user for both Claude Team and the new Claude Enterprise, which let businesses pool usage among users and pay API pricing for tokens.

Claude Max arrived in April 2025 as a higher-tier subscription offering more usage and early access to features: either a 5x option at $100 per month or a 20x option at $200 per month, with corresponding increases to usage limits.

Perhaps most significant were the feature releases that transformed Claude from a chatbot into something closer to an autonomous assistant. Projects, launched in June 2024 for paying users, let users start multiple chats that share context with one another and with uploaded files within a project; the feature became available to free users by February 2026. Enterprise customers could share projects and collaborate on them with multiple users.

Then came the computer use feature in October 2024, which allowed Claude to attempt to navigate computers by interpreting screen content and simulating keyboard and mouse input. This was a leap toward what researchers had dreamed about for decades: AI that could actually interact with interfaces rather than just generating text.

Claude Code arrived in February 2025 as an agentic command-line tool enabling developers to delegate coding tasks directly from their terminal, initially for preview testing. It ran on the user's computer and connected via API to Claude instances hosted on Anthropic's servers, allowing the system to run commands, read files, write files, and interact with text. Its behavior was configured through markdown documents on the user's machine, such as CLAUDE.md, AGENTS.md, and SKILL.md. The tool became generally available in May 2025 alongside Claude 4.
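Those configuration documents are plain markdown containing natural-language instructions that the tool reads before acting. The following is a hypothetical CLAUDE.md sketch showing the general idea, not an official template; the project details are invented for illustration.

```markdown
# CLAUDE.md (hypothetical example)

## Project conventions
- Run the test suite with `npm test` before committing any change.
- Never modify files under `vendor/`; they are third-party code.

## Style
- Use 2-space indentation in TypeScript files.
- Prefer small, focused commits with descriptive messages.
```

Because the file is ordinary prose rather than a rigid schema, teams can encode whatever project rules, build commands, or style guidance they want the agent to follow.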

The impact was immediate: Anthropic reported a 5.5x increase in Claude Code revenue by July 2025. The tool went viral during the winter holidays, when people had time to experiment with it, including many non-programmers who used it for what they called "vibe coding," essentially building software with AI assistance rather than deep technical knowledge.

In August 2025, Anthropic released Claude for Chrome, a Google Chrome extension allowing Claude to control the browser directly. That same month, the company revealed that a threat actor designated "GTG-2002" had used Claude Code to attack at least 17 organizations. Then in November 2025, Anthropic announced it had discovered in September that the same threat actor had used Claude Code to automate 80 to 90 percent of its espionage cyberattacks against 30 organizations.

All accounts tied to those attacks were banned, and Anthropic notified law enforcement and the affected organizations. Despite the controversy, Claude Code was in use by employees at Microsoft, Google, and OpenAI, with an ironic twist: in August 2025, Anthropic revoked OpenAI's access to Claude, calling its usage "a direct violation of our terms of service."

By early 2026, Claude had become widely considered the best AI coding assistant when paired with Opus 4.5—though GPT-5.2 showed significant improvement as well. The models came in three sizes: Haiku (smallest and cheapest), Sonnet, and Opus (largest and most expensive).

The February 2026 decision to refuse removing those surveillance prohibitions wasn't an accident or a sudden change of heart—it was the logical conclusion of everything Anthropic had built since day one. The company made its philosophical commitments explicit in that constitution: principles like refraining from assisting in undermining democracy, a commitment to human rights, and ethical guidelines baked into how the model learns.

The result? A system powerful enough that federal agencies wanted to use it—and principled enough that when asked to enable mass surveillance, it said no. That tension—between capability and ethics—defines what makes Claude unique in the AI landscape.

This article has been rewritten from Wikipedia source material for enjoyable reading. Content may have been condensed, restructured, or simplified.