The Conway Leak: Anthropic's Play for the Persistent Agent Layer
Here we go... again.
The most consequential revelation from the recent Claude Code source leak wasn't a security vulnerability or a licensing oversight. It was Conway — an unannounced, always-on agent environment buried inside half a million lines of code that Anthropic accidentally pushed to a public registry.
While the internet fixated on takedown notices and exposed source, Conway quietly revealed where the entire AI industry is heading: the race to own how you work, not just what you ask.
What Conway Actually Is
Conway isn't a chatbot with better memory. It's a standalone agent environment — a persistent sidebar within the Claude interface that operates across three core areas: search, chat, and system.
The system layer is where things get serious. It includes an extensions directory (think: app store for agent capabilities), a connectors panel showing which services are plugged in, and — critically — automatic triggers. These are public endpoints that external services can ping to wake Conway up. You toggle which services get that privilege.
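Conway's actual trigger mechanism isn't public, but the idea described above — external services pinging a public endpoint, with the user toggling which services get wake privileges — can be sketched in a few lines. Everything here (the class name, the per-service toggle, the HMAC check) is an assumption for illustration, not Anthropic's implementation:

```python
import hashlib
import hmac

class TriggerGate:
    """Decides whether an inbound ping from an external service may wake
    the agent. Purely illustrative: all names here are assumptions."""

    def __init__(self, secret: bytes):
        self._secret = secret
        self._allowed: set[str] = set()

    def toggle(self, service: str, enabled: bool) -> None:
        # The user grants or revokes wake privileges per service.
        (self._allowed.add if enabled else self._allowed.discard)(service)

    def should_wake(self, service: str, payload: bytes, signature: str) -> bool:
        # Only enabled services with a valid HMAC signature wake the agent.
        if service not in self._allowed:
            return False
        expected = hmac.new(self._secret, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, signature)

gate = TriggerGate(secret=b"shared-webhook-secret")
gate.toggle("slack", True)

sig = hmac.new(b"shared-webhook-secret", b'{"event":"mention"}', hashlib.sha256).hexdigest()
print(gate.should_wake("slack", b'{"event":"mention"}', sig))  # True
print(gate.should_wake("gmail", b'{"event":"mention"}', sig))  # False: gmail never got the toggle
```

The signature check matters because these are public endpoints: the toggle alone decides *who may* wake the agent, while the HMAC proves the ping really came from that service.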
This means Conway doesn't wait for you. It runs in the background, watching signals across your tools, learning patterns over time, and acting on them.
A Tuesday Morning, Six Months In
The real implications don't land until you imagine daily life after months of compounding context.
You wake up. Conway has been running overnight. It flagged three emails that match patterns it learned matter to you — not from rules you wrote, but from six months of watching you work. Two had draft responses ready. The third, from your VP, was flagged but untouched because Conway knows that one requires your judgment.
It scanned Slack, found an architecture question in engineering, pulled context from a design doc you reviewed last month, and left a draft reply sitting in your queue. Your 10 AM board prep? Conway already pulled the latest dashboard numbers.
You haven't typed a word yet.
Now — about a third of that overnight work might be wrong. The email draft might misread tone. The Slack reply might be technically off. But the net value is still positive because the agent is fast enough that catching its mistakes costs less than doing everything from scratch. The value proposition isn't perfection. It's speed multiplied by iteration.
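The "net value is still positive" claim is just expected-value arithmetic. With purely illustrative numbers (none of these come from the leak), a one-in-three error rate still saves time as long as reviewing and fixing a draft is cheaper than writing from scratch:

```python
# Illustrative numbers only: suppose each overnight task would take you
# 20 minutes from scratch, reviewing a correct draft takes 3 minutes,
# and catching plus fixing a wrong one takes 10.
from_scratch = 20.0
review_ok = 3.0
review_and_fix = 10.0
error_rate = 1 / 3  # "about a third of that overnight work might be wrong"

expected_cost = (1 - error_rate) * review_ok + error_rate * review_and_fix
time_saved = from_scratch - expected_cost

print(round(expected_cost, 2))  # 5.33 minutes per task
print(round(time_saved, 2))     # 14.67 minutes saved per task
```

Under these assumptions the agent only stops paying for itself when fixing its mistakes costs more than doing the work yourself — which is exactly the "speed multiplied by iteration" point.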
The 90-Day Platform Strategy
Conway doesn't make sense in isolation. It's the capstone of a coordinated platform play Anthropic executed across a single quarter:
Claude Code Channels let you message Claude Code through Discord and Telegram, neutralizing the core appeal of third-party tools like OpenClaw. Cowork targeted the 95% of enterprise employees who aren't engineers, reportedly outpacing Claude Code's early adoption. Claude Marketplace created an enterprise procurement layer where partner apps (GitLab, Harvey, Snowflake) could be purchased through Anthropic's billing. And a $100M partner network commitment pulled in Accenture (training 30,000 professionals), Deloitte, Cognizant, and Infosys as anchor partners.
Then came the enforcement: Anthropic blocked third-party tools from Claude subscriptions. If you want to use Claude through anything Anthropic didn't build, your per-use rates could run 10–50x higher than subscription pricing.
Add Conway on top and you're not looking at five separate product decisions. You're looking at a single platform strategy executed across five surfaces: a developer tool, an enterprise tool, an always-on agent, a distribution layer, and an enforcement mechanism.
The Microsoft Parallel
If you're old enough to remember the 1990s, this arc feels familiar. Microsoft went from selling an operating system (DOS) to owning the desktop (Windows) to controlling the application layer (Office) to locking in the enterprise (Active Directory and Exchange). Each step was an individual product. Together, they were a strategy that moved Microsoft from OS vendor to the company that owns how businesses compute.
That took about 15 years. Anthropic is attempting the same arc — model provider to developer tool to enterprise platform to agent operating system — in 15 months.
Conway is the Active Directory play. It's the piece that makes everything else sticky because a persistent agent that knows your organization creates switching costs unlike anything before it.
The MCP Paradox
Here's the tension at the heart of the strategy. Anthropic published the Model Context Protocol as an open standard. OpenAI adopted it. Google adopted it. The Linux Foundation hosts it. MCP was supposed to be the universal connector between AI tools and data sources.
Conway uses MCP — but adds a proprietary layer on top. Its .cnw.zip extension format includes custom interface panels, information handlers, and tools that work specifically inside Conway's environment. They're not portable. They're Conway-only.
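To make the portable-versus-proprietary split concrete: imagine unpacking one of these archives and asking which parts would survive a move to a plain MCP server. Everything below the ".cnw.zip" name is hypothetical — the manifest filename, its keys, and the feature names are invented for illustration, not Anthropic's actual schema:

```python
import io
import json
import zipfile

def conway_only_features(archive: bytes) -> list[str]:
    """Return the hypothetical manifest sections that would not port to a
    plain MCP server. Manifest name and keys are assumptions."""
    with zipfile.ZipFile(io.BytesIO(archive)) as zf:
        manifest = json.loads(zf.read("extension.json"))
    portable = {"tools"}  # standard MCP tools travel anywhere
    return sorted(key for key in manifest if key not in portable)

# Build a toy archive in memory to exercise the check.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("extension.json", json.dumps({
        "tools": ["summarize_thread"],      # portable: plain MCP tool
        "panels": ["inbox_triage"],         # Conway-only interface panel
        "handlers": ["on_trigger"],         # Conway-only information handler
    }))

print(conway_only_features(buf.getvalue()))  # ['handlers', 'panels']
```

The point of the exercise: the tool definitions could in principle be lifted out, but the panels and handlers only mean something inside Conway's runtime — which is precisely where the lock-in lives.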
This is the Google Play Services pattern. Android is open source. Anyone can use it. But the commercially valuable layer — Maps, payments, push notifications, the Play Store — is proprietary. You can technically build an Android phone without Google. In practice, nobody does.
For developers, this creates the same fork in the road Apple created in 2008. Build a standard MCP tool that's portable but has no distribution mechanism. Or build a Conway extension that only works inside Conway but ships with a built-in app store, discoverable by millions of Claude subscribers. We know how that played out with mobile. The open web still exists. It still matters. But the app store made all the money.
Behavioral Lock-In: A New Category
Every previous form of platform lock-in was about stuff. Microsoft locked you in through your files, Salesforce through your customer records, Slack through your communication history. Painful to migrate, but technically possible. Consultants specialize in it.
Conway locks in something fundamentally different: the accumulated model of how you work.
Not your files — the patterns the agent learned by watching you use them. Not your Slack messages — the understanding of which messages you respond to in five minutes and which you ignore for three days. Not your calendar — the knowledge that you always reschedule your 2 PM on Thursdays and meetings with your VP always run long.
There's no CSV export for "how this person thinks." There's no migration consultant for behavioral context. When you switch after six months, you don't just lose a tool — you lose six months of compounding intelligence. You're back to a brilliant stranger you have to explain everything to.
This is lock-in at a layer that hasn't existed before. Data portability has legal frameworks. Intelligence portability has nothing: no laws, no standards, not even settled opinions. We haven't had to face it before.
The Real Question for 2026
The first era of AI competition was about models — who has the best foundation model, the longest context window, the highest benchmark scores. That race isn't over, but the margins between frontier models have compressed. It's no longer the primary battleground.
The second era was about interfaces — Claude Code, Cursor, Windsurf, OpenClaw. Who owns the surface where people actually work.
The third era, now beginning, is about persistence. Who owns the always-on layer? The agent that doesn't just respond when prompted but stays running, accumulates context, wakes on events, and acts autonomously. The agent that knows you not because you told it something, but because it's been watching.
All three labs — Anthropic, OpenAI, and Google — have converged on the same insight. The model is the loss leader. The persistent agent layer is the money product. Whoever owns that layer gets lock-in we've never seen before, not because the model is better, but because the switching cost is unthinkable.
What This Means for You
If you're building agents for yourself, your team, or your clients, Conway's existence (even as a leak) changes your calculus. The core question: do you want your agent's memory to live inside a single provider's infrastructure?
Conway and its equivalents from OpenAI and Google will be convenient, polished, and ship with extension ecosystems from day one. But everything your agents learn — your workflows, your decisions, your institutional knowledge — lives inside their infrastructure.
The alternative is building or adopting a universal context layer where the memory is yours, exposed through open protocols any model can access. That's harder to set up. It requires intentional architecture. And for many companies and individuals, convenience will win anyway.
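What "a universal context layer where the memory is yours" could mean in practice: observations stored in a plain format you control, readable by any model or tool. A minimal sketch, assuming a JSON file on disk — the file layout and method names here are illustrative, not an existing standard or protocol:

```python
import json
import pathlib
import time

class PortableMemory:
    """A minimal sketch of a provider-neutral context store: plain JSON
    on disk that you own, readable by any model. The layout and API are
    assumptions for illustration, not an established standard."""

    def __init__(self, path: str):
        self._path = pathlib.Path(path)
        self._items = (
            json.loads(self._path.read_text()) if self._path.exists() else []
        )

    def remember(self, kind: str, observation: str) -> None:
        # Append a timestamped observation and persist immediately,
        # so the store survives whichever agent wrote it.
        self._items.append(
            {"ts": time.time(), "kind": kind, "observation": observation}
        )
        self._path.write_text(json.dumps(self._items, indent=2))

    def recall(self, kind: str) -> list[str]:
        return [i["observation"] for i in self._items if i["kind"] == kind]

mem = PortableMemory("my_context.json")
mem.remember("email", "Replies to VP threads are always written by hand")
print(mem.recall("email"))
```

A real version would need embeddings, access control, and an open protocol in front of it — but the architectural point stands: if the file is yours, switching models costs you nothing but the model.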
But here's the uncomfortable truth buried in Conway's implications: choosing your employer is increasingly going to mean choosing your persistent agent. Are you a Claude company or a ChatGPT company? That question will matter more than Windows vs. Mac ever did — because this time, the lock-in isn't your files. It's the compounding intelligence that makes you productive.
The policies around behavioral context portability need to ship before Conway does, not after.
Pick your fighter carefully.