Claude Opus 4.7: Every New Feature Explained (April 2026)
Claude Opus 4.7 arrived April 16 with higher-resolution vision, a new effort level, task budgets, and a step-change in agentic coding performance.
What Changed in Claude Opus 4.7
Anthropic released Claude Opus 4.7 on April 16, 2026 — two and a half months after Opus 4.6 shipped in February. The update is not a full generational leap, but it closes specific gaps that frustrated developers and power users: vision accuracy on detailed images, agentic coding reliability, and the lack of fine-grained control over reasoning depth. At the same price as its predecessor ($5/$25 per million tokens), Opus 4.7 is a meaningful upgrade for anyone already running Opus 4.6 in production.
Quick summary: Claude Opus 4.7 is Anthropic's most capable publicly available model as of April 2026. It scores 87.6% on SWE-bench Verified (up from 80.8% on Opus 4.6), raises maximum image resolution to 3.75MP, and introduces task budgets and a new xhigh effort level for agentic workflows — all at the same price.
1. High-Resolution Vision: From 1.15MP to 3.75MP
The most visually obvious change in Opus 4.7 is the vision upgrade. Maximum image resolution has jumped from 1,568px (1.15MP) to 2,576px (3.75MP), more than a threefold increase in the image data the model can process. In practice, this matters for three types of tasks:
- Computer use agents that need to read dense UI elements, small text, or multi-pane layouts without cropping
- Document extraction from scanned PDFs, invoices, and tables with fine print
- Design and prototyping workflows, where Anthropic has also launched Claude Design — a new research preview tool powered by Opus 4.7 that reads your codebase and design files to build a shareable team design system
Previous Claude models at 1.15MP often missed details in screenshots of dense dashboards or spreadsheets. The 3.75MP limit brings Opus 4.7 closer to what professionals actually need for document and interface analysis tasks.
2. The xhigh Effort Level
Claude already supports effort levels for adaptive thinking, letting developers trade response latency for reasoning depth. Opus 4.7 adds a new xhigh setting that sits between the existing high and max options, and Claude Code now defaults to xhigh on all subscriber plans.
Anthropic's own benchmark data shows that xhigh at 100k tokens scores 71% on hard coding evaluations — already ahead of Opus 4.6's max effort at 200k tokens. That means you get better results while spending fewer tokens. For cost-sensitive production workflows, this is the most practically important API addition in this release.
The recommended starting points from Anthropic: use high or xhigh for coding and agentic tasks. Reserve max for genuinely novel reasoning problems where you need the ceiling.
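In request terms, selecting an effort level might look like the sketch below. This is a minimal illustration only: the effort parameter name, its location in the request body, and the model id are assumptions inferred from this article, not confirmed API shapes, so check the current API reference before relying on them.

```python
# Sketch: build a Messages-style request body with an explicit effort level.
# The "effort" field name, its top-level placement, and the model id are
# ASSUMPTIONS for illustration; verify against the live API reference.

def build_request(prompt: str, effort: str = "xhigh") -> dict:
    """Return a request body dict with the chosen effort level."""
    # Only the levels named in this article; the full set may differ.
    allowed = {"high", "xhigh", "max"}
    if effort not in allowed:
        raise ValueError(f"unknown effort level: {effort}")
    return {
        "model": "claude-opus-4-7",   # hypothetical model id
        "max_tokens": 4096,
        "effort": effort,             # assumed parameter name
        "messages": [{"role": "user", "content": prompt}],
    }

request = build_request("Refactor this module for readability.", effort="high")
```

Keeping the level as a per-request parameter (rather than a global default) makes it easy to route coding traffic to xhigh while reserving max for the occasional hard reasoning problem.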
3. Task Budgets (Beta)
Task budgets are a new beta feature that lets developers set a hard token ceiling on an agentic loop. The model sees a running countdown of remaining tokens and uses it to prioritize work — finishing the task gracefully as the budget is consumed rather than cutting off mid-step or looping indefinitely.
To enable task budgets, you set the task-budgets-2026-03-13 beta header and add a task_budget object to your output config specifying the token total. Anthropic notes that budgets set too low for a given task may cause the model to complete work less thoroughly or decline the task — so expect to tune the ceiling per use case.
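Putting those two pieces together, an enabling sketch might look like this. The beta flag string and the task_budget object come from the description above; the anthropic-beta header name and the exact shape of the output config are assumptions to confirm against the API reference.

```python
# Sketch: headers and body for a task-budgeted agentic request.
# The "anthropic-beta" header name and the output/task_budget field shape
# are ASSUMPTIONS; the beta flag string is taken from the article above.

def task_budget_request(prompt: str, budget_tokens: int) -> tuple[dict, dict]:
    """Return (headers, body) for a request with a hard token ceiling."""
    if budget_tokens <= 0:
        raise ValueError("budget must be a positive token count")
    headers = {
        "anthropic-beta": "task-budgets-2026-03-13",  # beta flag from the docs
    }
    body = {
        "model": "claude-opus-4-7",                   # hypothetical model id
        "messages": [{"role": "user", "content": prompt}],
        "output": {
            "task_budget": {"tokens": budget_tokens}, # assumed field shape
        },
    }
    return headers, body

headers, body = task_budget_request("Triage the open issues.", 50_000)
```

Start the ceiling generous and ratchet it down while watching for the degraded-completion behavior Anthropic warns about; a budget that is too tight shows up as shallow or declined work, not as an error.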
This feature directly addresses a common complaint with agentic AI: unpredictable cost blowouts on open-ended tasks. Task budgets give engineering teams a reliable lever for cost control without sacrificing the autonomy that makes agents useful.
4. Agentic Coding: The Step-Change Upgrade
Anthropic describes the jump in agentic coding performance as a "step-change" over Opus 4.6. The numbers support that framing:
| Benchmark | Opus 4.6 | Opus 4.7 |
|---|---|---|
| SWE-bench Verified | 80.8% | 87.6% |
| SWE-bench Pro | 53.4% | 64.3% |
| CursorBench | 58% | 70% |
Beyond benchmark scores, Anthropic specifically improved reliability on long-running, multi-session agentic work. Agents that write to and read from scratchpads or notes files across long sessions now hold context more reliably, a real fix for workflows that previously lost state mid-task. Claude Code also gains a new /ultrareview command, with three free review credits offered to all users at launch.
5. New Tokenizer (and What It Costs You)
Opus 4.7 ships with a new tokenizer that improves performance across a range of tasks — but it is also a breaking change for anyone tracking token usage tightly. The new tokenizer uses between 1x and 1.35x as many tokens as Opus 4.6, depending on the content type. That means the same prompt that cost you 10,000 tokens on Opus 4.6 could cost up to 13,500 tokens on Opus 4.7.
The /v1/messages/count_tokens endpoint reflects the new counts, so you can audit your prompts before migrating. If you run high-volume workloads and the up-to-35% token overhead is a concern, Anthropic recommends evaluating Claude Sonnet 4.6 at $3/$15 per million tokens as a cost-efficient alternative for tasks that do not need frontier coding performance.
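The migration math is worth doing explicitly. The sketch below uses the figures from this article ($5/$25 and $3/$15 per million tokens, up to a 1.35x token multiplier); the example workload volumes are invented for illustration, and it assumes Sonnet 4.6 keeps the old token counts, which you should verify with the count_tokens endpoint.

```python
# Back-of-the-envelope cost check for the tokenizer change.
# Prices and the 1.35x worst-case multiplier come from the article;
# the 100M/20M monthly workload is a MADE-UP example.

def monthly_cost(in_tok: float, out_tok: float,
                 in_rate: float, out_rate: float) -> float:
    """Dollar cost for token counts priced per million tokens."""
    return (in_tok * in_rate + out_tok * out_rate) / 1_000_000

# Workload measured on Opus 4.6: 100M input + 20M output tokens per month.
opus46 = monthly_cost(100e6, 20e6, 5, 25)                      # $1,000
opus47_worst = monthly_cost(100e6 * 1.35, 20e6 * 1.35, 5, 25)  # ≈ $1,350 worst case
sonnet46 = monthly_cost(100e6, 20e6, 3, 15)                    # $600, assuming old counts
```

The same per-token price can still mean a 35% larger bill in the worst case, which is exactly why auditing your real prompts against the new counts before migrating matters.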
6. Thinking Content Is Now Hidden by Default
A quieter but important behavior change: starting with Opus 4.7, thinking blocks still appear in the response stream, but the thinking field is empty by default, which trims payload size and streaming overhead slightly. If your product streams reasoning to users or depends on thinking output for logging or debugging, you must explicitly opt back in by setting "display": "summarized" in your output config. No error is raised if you miss this; the reasoning is silently omitted, and because the model still thinks before answering, users may see a noticeable pause before output begins.
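The opt-in described above can be sketched as a small helper that patches a request body. The "display": "summarized" value comes from this section; its exact location inside the output config is an assumption to confirm against the API reference.

```python
# Sketch: opt a request back into summarized thinking output.
# The placement of "display" inside the output config is an ASSUMPTION;
# only the "display": "summarized" value is taken from the article.

def with_summarized_thinking(body: dict) -> dict:
    """Return a copy of a request body that opts into summarized thinking."""
    updated = dict(body)                          # shallow copy; body untouched
    output = dict(updated.get("output", {}))      # preserve any existing config
    output["display"] = "summarized"              # default hides thinking content
    updated["output"] = output
    return updated

body = with_summarized_thinking({"model": "claude-opus-4-7"})
```

Applying the opt-in centrally, in one request-building helper, is safer than sprinkling it per call site, since the failure mode is silent omission rather than an error.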
7. Automated Cybersecurity Safeguards
Opus 4.7 is the first Claude model to ship with automated detection and blocking for prohibited cybersecurity uses. Anthropic deliberately trained the model to have lower offensive cyber capabilities than its unreleased Claude Mythos Preview — Opus 4.7 scores 73.1% on the CyberGym vulnerability reproduction benchmark, well above GPT-5.4's 66.3% but below Mythos Preview's 83.1%. Security professionals with legitimate use cases can apply through Anthropic's formal verification program.
Who Opus 4.7 Is NOT For
If your workload is cost-sensitive and does not require frontier coding performance, Sonnet 4.6 remains the smarter choice: the new tokenizer's up-to-35% overhead on token counts means Opus 4.7 can get meaningfully more expensive on high-volume tasks, even at the same per-token price. If your agents rely on BrowseComp-style web research performance, note that Opus 4.7 actually scores lower than Opus 4.6 on that specific benchmark, so do not assume it is a universal upgrade. And if you have heavily tuned prompts for Opus 4.6's interpretive behavior, expect to rewrite them: Opus 4.7's stricter instruction-following can break prompts that relied on the older model filling in gaps.
Where to Access Claude Opus 4.7
Opus 4.7 is available on claude.ai across Pro, Max, Team, and Enterprise plans. It is also accessible via the Anthropic API, Amazon Bedrock, Google Cloud Vertex AI, Microsoft Foundry, and GitHub Copilot (Pro+, Business, and Enterprise tiers). Pricing is unchanged from Opus 4.6. Browse more AI assistant tools on SwitchTools for alternatives and comparisons.
Frequently Asked Questions
What is new in Claude Opus 4.7 compared to Opus 4.6?
Opus 4.7 adds 3.75MP high-resolution vision (up from 1.15MP), a new xhigh effort level for agentic tasks, task budgets in beta for token-controlled agentic loops, major coding benchmark improvements, and a new tokenizer. Thinking content is now hidden by default, and automated cybersecurity safeguards have been added at the model level.
Does Claude Opus 4.7 cost more than Opus 4.6?
The per-token price is unchanged at $5 per million input tokens and $25 per million output tokens. However, the new tokenizer uses up to 35% more tokens on the same prompts compared to Opus 4.6, so your effective spend may increase even though the rate stays the same. Run your key prompts through the token counter before migrating.
Can I use Claude Opus 4.7 through Amazon or Google Cloud?
Yes. Opus 4.7 is available on Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Foundry in addition to the Anthropic API and claude.ai. Availability rolled out across platforms on April 16, 2026. Regional and global endpoint options vary by cloud provider.
How does the xhigh effort level work in Claude Opus 4.7?
The xhigh effort level sits between the existing high and max settings and gives finer control over the reasoning depth versus latency tradeoff. Claude Code defaults to xhigh for all plans. Anthropic's data shows xhigh at 100k tokens already beats Opus 4.6's max at 200k tokens on coding benchmarks, making it the recommended default for most production agentic workflows.
Is Claude Opus 4.7 the most capable Claude model available?
It is the most capable publicly available model from Anthropic as of April 2026. Claude Mythos Preview is more capable overall, but it is restricted to an invitation-only research preview for defensive cybersecurity use cases through Project Glasswing. There is no self-serve access to Mythos Preview at this time.