We're launching the Anthropic STEM Fellows Program. AI will accelerate progress in science and engineering. We're looking for experts across these fields to work alongside our research teams on specific projects over a few months. Learn more and apply:
Posts by Anthropic
Amazon is also investing an additional $5 billion in Anthropic today, with up to $20 billion more in the future.
Read more: https://www.anthropic.com/news/anthropic-amazon-compute
We're expanding our collaboration with Amazon to secure up to 5 gigawatts of compute for training and deploying Claude. Capacity begins coming online this quarter, with nearly 1 gigawatt expected by the end of 2026.
Available now on all paid plans.
Update or download the Claude app to try it in Cowork: http://claude.com/download
Everything you build is saved to the new Live Artifacts tab, with version history. Come back tomorrow or next month, from any session, and pick up where you left off.
Live Artifacts, now in Claude Cowork
In Cowork, Claude can now build live artifacts: dashboards and trackers connected to your apps and files.
Open one any time and it refreshes with current data.
The Claude Code hackathon is back for Opus 4.7.
Join builders from around the world for a week, with the Claude Code team in the room and a prize pool of $100K in API credits.
Apply by Sunday: https://cerebralvalley.ai/e/built-with-4-7-hackathon
Claude for Word, now on Pro and Max
Claude for Word is now available on Pro and Max plans to use alongside Opus 4.7: https://claude.com/claude-for-word
Claude reads your codebase and design files to build your team's design system, then applies it automatically, keeping every project on-brand.
Give it a try: http://claude.ai/design
Read more: www.anthropic.com/news/claude-design-anthr...
Describe what you want, and Claude builds the first version. Refine through conversation, inline comments, direct edits, or custom sliders.
Export to @canva, as PDF or PPTX, or hand off to Claude Code when the design feels right.
Introducing Claude Design by Anthropic Labs: make prototypes, slides, and one-pagers by talking to Claude.
Powered by Claude Opus 4.7, our most capable vision model. Available in research preview on the Pro, Max, Team, and Enterprise plans, rolling out throughout the day.
Claude Opus 4.7 is available today on http://claude.ai, the Claude Platform, and all major cloud platforms.
Read more: https://www.anthropic.com/news/claude-opus-4-7
In Claude Code, the new /ultrareview command runs a dedicated review session that reads through your changes and flags what a careful reviewer would catch.
We've also extended auto mode to Max users, so longer tasks run with fewer interruptions.
On the API, a new xhigh effort level between high and max gives you finer control over reasoning and latency on hard problems. Task budgets (beta) help Claude prioritize work and manage costs across longer runs.
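A minimal sketch of what selecting the new effort level might look like in a request payload. The model ID, the `effort` field name and placement, and the set of allowed values are assumptions inferred from the announcement, not the documented API schema; check the official API reference before relying on any of them.

```python
import json

def build_request(prompt: str, effort: str = "xhigh") -> dict:
    """Assemble a Messages API payload with an explicit effort level.

    "xhigh" sits between "high" and "max" per the announcement; the
    other values and the top-level field name are assumptions here.
    """
    assert effort in {"low", "medium", "high", "xhigh", "max"}
    return {
        "model": "claude-opus-4-7",  # hypothetical model ID
        "max_tokens": 4096,
        "effort": effort,            # assumed field name and placement
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("Find the minimal counterexample, if one exists.")
print(json.dumps(payload, indent=2))
```

The idea is simply that harder problems get a higher effort setting, trading latency for deeper reasoning, while routine requests can stay at a lower level.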
Opus 4.7 also has substantially better vision. It can see images at more than three times the resolution and produces higher-quality interfaces, slides, and docs as a result.
Claude Opus 4.7 Benchmarks
Introducing Claude Opus 4.7, our most capable Opus model yet.
It handles long-running tasks with more rigor, follows instructions more precisely, and verifies its own outputs before reporting back.
You can hand off your hardest work with less supervision.
Quote Tweet: https://twitter.com/i/status/2044488099707949545
Research we co-authored on subliminal learning—how LLMs can pass on traits like preferences or misalignment through hidden signals in data—was published today in @Nature.
Read the paper: https://www.nature.com/articles/s41586-026-10319-8
We discuss this, along with the other implications of this research, in our blog: www.anthropic.com/research/automated-align...
For the full study, see here: alignment.anthropic.com/2026/automated-w2s-resea...
AI models aren’t yet general-purpose alignment scientists. On most alignment research tasks, progress is harder to verify: our AARs would find “fuzzier” research much more difficult.
But our experiment does show that Claude can increase the rate of experimentation and exploration.
Graph showing the performance gap recovered by two AAR-discovered ideas (in red and blue) when applied to held-out math and coding datasets. The dashed line indicates the best human-tuned method that we used as a baseline.
To test the broader usefulness of the AARs’ methods, we assessed how well they worked on two datasets the AARs hadn’t seen before.
The AARs’ best-performing method successfully generalized to both coding and math tasks, though their second-best method only generalized to math.
Graph showing the performance gap recovered over cumulative research hours for nine parallel Automated Alignment Researchers, relative to a human-tuned baseline. A score of 1.0 means the method fully matches a model trained on ground-truth labels.
Here, we measure success by the fraction of the “performance gap” we can close between the weak model and the potential of the strong model.
After 7 days, human researchers had closed it by 23%. Our Automated Alignment Researchers (Opus 4.6 with extra tools) closed it by 97%.
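The gap-recovered metric described above reduces to a simple ratio: how far a method's score moves from the weak model's score toward the strong model's ceiling. A small sketch, using illustrative numbers rather than the study's actual scores:

```python
def gap_recovered(weak: float, strong: float, method: float) -> float:
    """Fraction of the weak-to-strong performance gap a method closes.

    1.0 means the method matches a model trained on ground-truth labels;
    0.0 means no improvement over the weak supervisor alone.
    """
    return (method - weak) / (strong - weak)

# Illustrative scores only (not the paper's numbers):
weak, strong = 0.60, 0.90
print(gap_recovered(weak, strong, 0.669))  # a method closing ~23% of the gap
print(gap_recovered(weak, strong, 0.891))  # a method closing ~97% of the gap
```

Framing results this way normalizes across tasks with different absolute difficulty, so a 23% and a 97% gap recovery are directly comparable even when raw accuracies differ.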
New Anthropic Fellows research: developing an Automated Alignment Researcher. We ran an experiment to learn whether Claude Opus 4.6 could accelerate research on a key alignment problem: using a weak AI model to supervise the training of a stronger one.
Download or update the Claude desktop app to get started: http://claude.com/download
Explore everything that's new: http://claude.com/product/claude-code#updates
The redesign also adds an integrated terminal, file editing, HTML and PDF preview, and a faster diff viewer, all in a drag-and-drop layout you can arrange to your preference.
Your CLI plugins work exactly as they do on the command line.
We've redesigned Claude Code on desktop.
You can now run multiple Claude sessions side by side from one window, with a new sidebar to manage them all.
Available today across all paid plans, with Claude Code on the web enabled.
Docs: http://code.claude.com/docs/en/routines
Read more: https://claude.com/product/claude-code#updates
Scheduled routines let you give Claude a cadence and walk away. Try telling Claude to pull the top bug from Linear every night at 2am, attempt a fix, and open a draft PR.
If you've been using /schedule in the CLI, those are routines now, and there's nothing to migrate.
Webhook routines subscribe to GitHub events and let Claude respond as they come in. Try pointing one at your repo and asking Claude to flag any PR that touches /auth-provider and post a summary in #auth-changes.
More event sources are coming soon.
Routines each come with their own API endpoint, so you can point your alerts, deploy hooks, or internal tools at Claude directly. Try sending Claude an alert payload and asking it to find the owning service and post a triage summary to #oncall.
POST a message and get back a session URL.
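A sketch of what triggering a routine from an alerting system might look like, assuming the POST-message, get-session-URL flow described above. The endpoint URL, auth header, request body shape, and `session_url` response field are all placeholders inferred from the post, not documented API details.

```python
import json
import urllib.request

# Placeholder endpoint -- each routine gets its own URL per the announcement.
ROUTINE_URL = "https://example.invalid/routines/triage-oncall"

def build_trigger(alert_payload: dict, api_key: str) -> urllib.request.Request:
    """Assemble the POST request that kicks off a routine run.

    The body shape (a single "message" string) and the Bearer auth
    scheme are assumptions for illustration.
    """
    body = json.dumps({"message": json.dumps(alert_payload)}).encode()
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    return urllib.request.Request(ROUTINE_URL, data=body, headers=headers, method="POST")

def trigger_routine(alert_payload: dict, api_key: str) -> str:
    """POST the alert and return the session URL from the (assumed) response field."""
    with urllib.request.urlopen(build_trigger(alert_payload, api_key)) as resp:
        return json.load(resp)["session_url"]
```

Wiring a deploy hook or pager alert to a function like this would hand each incident straight to Claude, with the returned session URL linking to the triage run.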
Now in research preview: routines in Claude Code
Configure a routine once (a prompt, a repo, and your connectors), and it can run on a schedule, from an API call, or in response to an event.
Routines run on our web infrastructure, so you don't have to keep your laptop open.