Claude Opus 4.7 shipped. The source code leaked on GitHub. Dispatch launched. Managed Agents went live. OpenClaw hit 247,000 GitHub stars. And somewhere buried in all of it: references to an unreleased agent called KAIROS that nobody officially announced. Here's the full story.
In the span of roughly three weeks, Anthropic accidentally gave away its most closely guarded engineering secrets, shipped its most capable model ever, launched two major new products, and found itself at the center of a growing war between enterprise desktop AI agents. If you missed any of it, you missed a lot. Let's fix that.
Anthropic has been building quietly for years — the "safety-first" company that moves deliberately, ships carefully, and avoids the hype cycles that OpenAI and Google seem to embrace. That reputation took a serious hit in late March 2026, when Claude Code's entire source code leaked through a botched npm release and spread across GitHub. But what followed the leak — the new model, the new features, the new competitive dynamic with OpenClaw — is actually the more interesting story. This is your complete Claude AI news briefing for April 2026.
I've been following Anthropic closely since they launched the original Claude back in 2023. And I want to say upfront: I've never seen a four-week stretch quite like this one. Product launches, a major security incident, community drama, and at least one genuinely surprising benchmark result. Let's take it chronologically.
On March 31, 2026 — the eve of April Fools' Day, as if scheduled for maximum irony — Anthropic's engineering team pushed version 2.1.88 of the Claude Code npm package. Bundled quietly alongside the production code was a 59.8 MB source map file. If you're not a developer: a source map is a debugging artifact that maps compressed production code back to the original, readable source. It was never supposed to be in a public release. It absolutely was.
Within minutes, someone noticed. Within hours, it was everywhere. The roughly 512,000 lines of unobfuscated TypeScript were mirrored across GitHub, forked, archived on decentralized networks, and analyzed in real time by developers who'd been curious about Claude Code's internals for months. Boris Cherny, a Claude Code engineer at Anthropic, confirmed on X that it was plain developer error — not a tooling bug, not a supply chain attack. His follow-up was notable for its honesty: "Mistakes happen. As a team, the important thing is to recognize it's never an individual's fault. It's the process, the culture, or the infra."
Anthropic responded quickly with DMCA takedown requests to GitHub and other hosts. If you search for the original repos, you'll find the notices. What you'll also find is that the code remains widely available on decentralized networks and private forks — DMCA takedowns on the internet tend to work like whack-a-mole after the initial viral spread.
Beyond the orchestration architecture that competitors immediately started studying, the leaked source code contained references to several unreleased features hidden behind feature flags. These are the ones the community has been talking about most:
KAIROS: a fully built autonomous daemon mode. Every few seconds it checks whether there's anything worth doing right now — fixing errors, responding to messages, updating files — without you initiating anything. Similar to OpenClaw, but native to Claude Code.
Coordinator Mode: turns Claude into an orchestrator that spawns and manages multiple worker agents in parallel. A full multi-agent architecture, hidden in the codebase, unreleased.
Undercover Mode: auto-activates for Anthropic employees on public repos and strips AI attribution from commits, with no off switch. This one caused the most community discussion.
An AI classifier that automatically approves tool permissions. No more manual permission prompts. Fully built, fully hidden behind a flag.
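For the non-developers: the daemon pattern described above is, at its core, just a polling loop. Here is a minimal Python sketch of the idea; every name in it is illustrative, since the leaked code itself isn't quoted here.

```python
import time

def pending_work():
    """Hypothetical work source. In a real daemon this would surface
    failing builds, unread messages, or files that need updating."""
    return []

def heartbeat_loop(poll_seconds=5, max_ticks=3):
    """Wake up every few seconds, ask "anything worth doing right now?",
    and act only when there is. A real daemon would loop forever;
    max_ticks keeps this sketch finite."""
    handled = []
    for _ in range(max_ticks):
        handled.extend(pending_work())  # stand-in for fixing/replying/editing
        time.sleep(poll_seconds)
    return handled
```

The interesting engineering is not the loop itself but deciding what counts as "worth doing", which is exactly the orchestration logic competitors were studying in the leak.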
The "Undercover Mode" detail generated the most heat on developer forums — the idea that there's a mode that hides AI attribution from commit histories, available only to Anthropic employees, raised genuine questions about transparency that the company has not fully addressed. The community's reaction ranged from "this is fine, it's a quality-of-life feature for internal development" to "this undermines everything Anthropic says about AI transparency."
The broader security analysis from experts was mixed. The leak itself — unlike the OpenAI Codex command injection vulnerability from December 2025 or GitHub Copilot's promotional ad injection scandal — didn't create immediate attack vectors. The more serious concern was strategic: competitors now have a detailed blueprint for building terminal-based agentic coding tools. The orchestration logic — how Claude Code decides when to search, when to edit, how to recover from errors — is the hard part that took Anthropic years to refine. That knowledge is now public.
Two weeks after the leak, on April 16, 2026, Anthropic shipped Claude Opus 4.7. The timing feels deliberate — a way to move the narrative forward, redirect developer attention to something new and shiny, and deliver a product milestone that had been referenced in the leaked code anyway. Whether or not that's what was happening, it worked. The discourse shifted.
Here's what actually shipped. Opus 4.7 is described by Anthropic as their "most capable generally available model to date" — with the important caveat that Claude Mythos Preview remains more powerful overall but is restricted to a small group of cybersecurity partners. From day one, 4.7 is available across the Claude API, Amazon Bedrock, Google Cloud Vertex AI, Microsoft Foundry, and GitHub Copilot. The model ID is claude-opus-4-7 if you're integrating it.
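If you're integrating, the switch is mostly a matter of pointing at the new ID. A minimal sketch, assuming the existing Anthropic Python SDK request shape carries over unchanged; the live call is commented out so this runs without an API key.

```python
# Target the new model by ID. The payload mirrors the existing
# Messages API shape; "claude-opus-4-7" is the ID given above.
payload = {
    "model": "claude-opus-4-7",
    "max_tokens": 1024,
    # Non-default temperature/top_p/top_k now return a 400 on this
    # model, so sampling overrides are deliberately omitted.
    "messages": [{"role": "user", "content": "Summarize this diff."}],
}

# from anthropic import Anthropic
# response = Anthropic().messages.create(**payload)
```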
Sharper vision: maximum image resolution jumped from 1,568px (1.15MP) to 2,576px (3.75MP) — roughly 3x the visual capacity. Computer-use coordinates now map 1:1 to actual pixels. Huge for document analysis, chart reading, and UI interactions.
An xhigh effort tier: a new setting between "high" and "max" — more compute per turn, higher quality, higher cost. Claude Code defaults to xhigh for many coding workflows on 4.7, and builders can set it explicitly for long-running agent tasks.
Task token budgets: runtime token budgets for agentic loops. You can now tell Claude how many tokens it has for a full task, and it manages itself accordingly — finishing gracefully as the budget is consumed. Widely called the "most underrated feature of the launch."
Multi-agent code review: a slash command in Claude Code for deep multi-agent review. Paid tiers get a small number of free runs; beyond that it's a premium mode. Think of it as your entire engineering team reviewing your code in parallel.
A 1M-token context window: at standard API pricing, with no long-context premium and a 128k max output. For entire codebases, lengthy contracts, or multi-hour transcripts, this is the only model in the industry that handles them reliably.
Hardened cybersecurity refusals: a direct consequence of the Mythos situation. Requests touching high-risk cybersecurity topics trigger automatic refusals; Anthropic deliberately reduced cybersecurity capabilities relative to Mythos Preview before shipping this model.
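The task-budget feature in the list above can be approximated client-side today. Here is a sketch under assumed names, where `step_fn` stands in for one agent turn and reports its own token usage; nothing in it is the shipped API.

```python
def run_with_budget(step_fn, budget_tokens, reserve=0.15):
    """Run agent steps until the remaining budget drops below a
    wrap-up reserve, then signal the model to finish gracefully.
    All names here are illustrative, not Anthropic's API."""
    used = 0
    while used < budget_tokens * (1 - reserve):
        tokens_this_step, done = step_fn(remaining=budget_tokens - used)
        used += tokens_this_step
        if done:
            return used, "completed"
    return used, "wrapping_up"  # tell the model to summarize and stop
```

The native feature presumably moves this accounting server-side, so the model itself plans around the remaining budget instead of being cut off.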
Opus 4.7 shows a 13% lift on coding benchmarks and 3x more production tasks resolved versus 4.6, according to Anthropic. It leads SWE-bench Pro at 64.3% for multi-file coding, versus GPT-5.5's 58.6% (OpenAI's response model that shipped a week later, on April 23). Where 4.7 trails: Terminal-Bench 2.0 (69.4% vs GPT-5.5's 82.7%) and OSWorld-Verified for computer-use tasks. Cursor announced 50% off Opus 4.7 inference on launch day to encourage adoption — a signal that they believe in the model for coding workflows.
⚠️ The Tokenizer Trap: Opus 4.7 uses a new tokenizer that can produce anywhere from 1x to 1.35x as many tokens as 4.6 on identical content — up to 35% more. Pro plan users hitting limits faster than expected? That's why. If you're running API workloads, replay your real traffic before migrating — the list price is the same, but your effective cost may not be. Also note: setting temperature, top_p, or top_k to any non-default value now returns a 400 error. Update your integrations before switching models.
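To see what that shift does to spend, it's worth running the numbers on your own traffic. A back-of-the-envelope sketch using the article's $5 input / $25 output per-million API pricing (the traffic volumes below are made up for illustration):

```python
PRICE_IN, PRICE_OUT = 5.00, 25.00  # $/M tokens, from the published API pricing

def effective_cost(in_tokens, out_tokens, tokenizer_factor=1.0):
    """Estimate spend for a traffic volume. tokenizer_factor models the
    1.0x-1.35x token inflation the new tokenizer can introduce."""
    dollars = (in_tokens * PRICE_IN + out_tokens * PRICE_OUT) / 1_000_000
    return dollars * tokenizer_factor

base = effective_cost(200_000_000, 40_000_000)         # on the 4.6 tokenizer
worst = effective_cost(200_000_000, 40_000_000, 1.35)  # worst case on 4.7
# Same list price, up to 35% higher effective cost on identical traffic.
```

Replaying real prompts through both models and comparing reported usage is the only way to know where in the 1.0x-1.35x range your workload actually lands.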
A Reddit thread titled "Opus 4.7 is not an upgrade but a serious regression" hit roughly 2,300 upvotes inside 48 hours. Higher ceiling, higher migration cost — that's the honest summary for most teams. — Community digest, launch week, April 2026
I think that framing is accurate. If you're running agents or writing code professionally, 4.7 is a measurable step up. If you're using Claude as a general chat assistant on the Pro plan and you haven't changed your prompts, you might actually prefer 4.6. The model is more literal, more precise, and more sensitive to prompt wording than its predecessor — which is a feature for power users and a frustration for everyone else.
While the model launch was grabbing headlines, two other Anthropic product releases deserve more attention than they've received. Both launched quietly in March 2026, and both represent a significant expansion of what Claude is actually capable of as a deployed system.
Claude Dispatch launched on March 17, 2026, as a research preview — first to Max plan subscribers ($100–$200/month), then rolling out to Pro plan users. It's part of Claude Cowork, Anthropic's framing of Claude as a persistent collaborative partner rather than a reactive chat window. The concept is straightforward but the execution is genuinely impressive: you send a task from your phone — a text message, a voice note, an email screenshot — and Claude executes it on your desktop while you're away. Come back to finished work. No need to be at your computer. No need to have left a terminal session running.
CNBC ran a live demo where someone running late for a meeting texted Claude from their Uber, asking it to export a pitch deck as PDF and attach it to a calendar invite. Claude did it. Autonomously. While the person was in transit. That's the thing about Dispatch that's genuinely novel — it's not a better search box. It's a task assignment interface for a computer that does work while you sleep.
Managed Agents is a different product for a different audience. This is Anthropic's hosted agent runtime — you define an agent as code, deploy it to Anthropic's infrastructure, and the API handles tool execution, memory management, and multi-turn reasoning. It's aimed at developers who want to embed agentic Claude features into their own applications without managing the orchestration infrastructure themselves.
The key trade-off: Claude Managed Agents only runs Claude (obviously). If you want model flexibility — the ability to swap in Gemini or GPT depending on cost or performance — you need a different solution. More on that in the OpenClaw comparison below.
Anthropic's plan lineup now looks like this:
- Free: basic Claude access — no Claude Code, no Dispatch.
- Pro ($20/mo): Claude Sonnet 4.6, Claude Code access, Dispatch (research preview), Cowork on desktop. This is the sweet spot for individual developers.
- Max 5x ($100/mo): 5x higher limits than Pro, Claude Opus 4.7, Agent Teams, priority access. First-week adopters of Dispatch were on this tier.
- Max 20x ($200/mo): highest limits, unrestricted Opus 4.7, early access to everything new. For power users who consistently hit the ceiling on lower tiers.
- API pricing: Opus 4.7 at $5 input / $25 output per million tokens. Remember the new tokenizer adds 1–35% token overhead vs. 4.6.
Here's the story that deserves its own article, and honestly, may be the most interesting development in the AI agent space this year. OpenClaw started as a side project by an Austrian developer named Steinberger. In under four months, it hit 247,000 GitHub stars and 47,700 forks. It became one of the fastest-growing open-source projects of the year. The Chinese developer community went particularly deep on it — to the point that the city of Shenzhen offered government subsidies for companies building on OpenClaw infrastructure. That's not a metaphor. That happened.
Anthropic's lawyers apparently decided the project's original claw-themed name echoed their Claude branding too closely. After trademark pressure, Steinberger renamed the project "Moltbot" on January 27, 2026 — a name that lasted exactly three days before he renamed it again, to "OpenClaw," on January 30. The lobster/claw imagery stuck, and Anthropic dropped the trademark issue. On February 14, Steinberger announced he was joining OpenAI (yes, their competitor) and that OpenClaw would transition to an open-source foundation. The community kept building.
At a product level, OpenClaw and Claude Cowork are solving the same problem — how do you have an AI that's actually working on your behalf, not just waiting to be asked? But their architectures couldn't be more different, and those differences matter depending on your use case.
The key distinction is what "always-on" actually means in practice. Claude Cowork + Dispatch requires your desktop computer to be on and logged in. If you want Claude to do something while your laptop is closed and charging overnight, Dispatch won't help you. OpenClaw runs in a container that stays alive 24/7 regardless of whether your personal devices are on. You can message it from your phone at 3am and it'll respond — because it's not running on your computer, it's running on a server.
The model flexibility point matters too. With Claude Cowork, you're committed to Claude. With OpenClaw, you swap the underlying model in a config file. DeepSeek R1 cheaper this month? Switch. Claude Sonnet smarter for your workflow? Switch back. For individuals or teams trying to optimize cost across multiple AI providers, that flexibility is genuinely valuable.
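In practice that swap looks roughly like this — a hypothetical config-driven backend picker; the config keys, endpoints, and model IDs are examples for illustration, not OpenClaw's actual schema.

```python
import json

# One-line config change is the whole "switch models" story.
config = json.loads('{"provider": "anthropic", "model": "claude-sonnet-4-6"}')

def pick_backend(cfg):
    """Map a provider name from config to a chat endpoint and model ID."""
    backends = {
        "anthropic": "https://api.anthropic.com/v1/messages",
        "deepseek": "https://api.deepseek.com/chat/completions",
    }
    return backends[cfg["provider"]], cfg["model"]

endpoint, model = pick_backend(config)
# Switching providers means editing the JSON, not the code.
```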
The verdict from real users in 2026: These aren't competing products — they're complementary. Use Claude Cowork to automate desktop tasks: documents, spreadsheets, internal tools that live on your computer. Use OpenClaw as your always-available AI assistant you can reach from anywhere, any device, without your computer needing to be on. The combination is more powerful than either alone.
I get this question a lot, especially from people who've been using ChatGPT as their default and are trying to figure out if Claude is worth the switch. Here's my honest take after months of using both regularly.
Claude's first superpower is instruction-following at scale. If you have a complex, multi-part prompt with specific formatting requirements, edge case handling, and structured output expectations — Claude follows it more precisely than any other model I've tested. Not slightly more precisely. Noticeably more precisely. This matters enormously for production agentic workflows where you can't have the AI improvise around your instructions.
The second superpower is code quality over code speed. Claude Code writes more complete, better-documented, more idiomatic output than Codex. It preserves your existing code structure when refactoring. It adds JSDoc comments that match your existing conventions. It thinks through edge cases before committing to an approach. Developers who measure the number of debugging cycles per feature — not just time-to-first-working-build — consistently rate Claude higher.
The third superpower, and the one that's most underappreciated, is honesty about uncertainty. Claude is significantly more likely to say "I'm not sure" or "I'd need to verify this" than ChatGPT, which tends toward confident answers regardless of certainty. For anything where being wrong is expensive — financial analysis, legal research, medical information, code that affects production systems — that epistemic humility is not a weakness. It's a feature.
With Opus 4.7, you can add a fourth: visual reasoning at scale. The jump from 1.15MP to 3.75MP image resolution is not incremental. If you're building systems that analyze charts, process scanned documents, verify UI layouts, or work with dense technical diagrams — that resolution upgrade changes what's actually possible.
Here's the roadmap as it stands:
- Claude Sonnet 4.8: confirmed by leaked source code references. Expected May 2026; it will likely inherit Opus 4.7's vision improvements and new tokenizer. Same $3/$15 pricing expected.
- KAIROS (persistent agent): built, unreleased, hidden behind a feature flag. The leaked code shows it's production-ready; Anthropic hasn't announced it publicly.
- Claude Mythos Preview (Capybara): more powerful than Opus 4.7, restricted to Project Glasswing partners for cybersecurity research. Not broadly available. Referenced in leaked docs as the model that can "find critical vulnerabilities in major operating systems."
- Claude 5: prediction markets currently price June 2026 as most likely. Would mark the next major generation jump.
Here's what I keep coming back to when I think about everything that's happened in Anthropic's world over the past month. The Claude Code source code leak was an embarrassing security failure that accidentally validated what developers already suspected: the engineering behind Claude Code is genuinely sophisticated, and the roadmap is more ambitious than anything the company had publicly communicated. KAIROS, Coordinator Mode, and the rest of the hidden features don't exist because some intern got bored — they exist because Anthropic has been building toward persistent, autonomous AI agents for a long time.
Claude Opus 4.7 is a real improvement for the developers and enterprise users it's designed for. The vision upgrade alone changes what's possible for document-heavy workflows. The xhigh effort level and task budgets represent a meaningful maturation of how Anthropic thinks about agentic control. The tokenizer trap is a real migration consideration that teams need to test before deploying.
Claude Dispatch and Claude Managed Agents signal that Anthropic is serious about the persistent AI assistant market — the space where OpenClaw has been building momentum with 247,000 GitHub stars and genuine community enthusiasm. The OpenClaw vs Claude Cowork comparison isn't really about which tool is better. It's about what kind of autonomy you want and where you want your data. Both products are moving fast and the category is genuinely exciting.
If you've been treating Claude AI as just another chatbot — a fancier search box with better writing — the combination of Claude Code, Dispatch, Managed Agents, and the coming KAIROS release tells a different story. Anthropic is building toward AI that doesn't wait to be asked. And for better or worse, their source code confirms that they're much further along that road than anyone outside the company knew.
Anthropic is shipping faster than any other period in the company's history. Bookmark this page — we'll update it as Claude Sonnet 4.8 drops, KAIROS goes live, and Claude 5 gets closer.