Anthropic Just Validated the Whole Bet
Anthropic shipped two things this week: /remote-control for Claude Code on February 24th (run a session on your machine, control it from your phone or browser) and scheduled recurring tasks in Cowork on February 25th. Cowork is Anthropic’s separate product for non-engineering workflows — think office tasks, not terminals. Both features are in research preview, Max-tier only for now.
If you’ve been paying attention to OpenClaw, IronClaw, NanoClaw, or any of the other frameworks in that family, this is going to feel familiar. One of OpenClaw’s core selling points has always been controlling your personal machine remotely (via WhatsApp, Telegram, whatever messaging app you already live in). Persistent. Always on. The agent waits for you, not the other way around. Anthropic just shipped a version of that. The first-day experience was rough — Simon Willison hit a login error requiring a full logout-and-back-in, then API 500 errors that bricked his session entirely — and Cowork’s scheduled tasks have an asterisk: they only run while your laptop is awake and Claude Desktop is open. So not really “scheduled tasks” so much as “polite reminders when you happen to be at your desk.”
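To make that asterisk concrete, here is a minimal sketch of the difference between a real scheduler and an app-gated one. All names here are hypothetical, invented for illustration, and none of this is Cowork's actual implementation: `due_runs` enumerates when a daily task should fire, and `app_gated_runs` keeps only the firings that happen to land while the app is open.

```python
from datetime import datetime, timedelta

def due_runs(schedule_hours, start, end):
    """Every time a daily task *should* fire between start and end."""
    runs, t = [], start
    while t <= end:
        if t.hour in schedule_hours:
            runs.append(t)
        t += timedelta(hours=1)
    return runs

def app_gated_runs(runs, app_open):
    """An app-gated scheduler: silently drops any run that lands
    while the app isn't open (laptop asleep, desktop app closed)."""
    return [t for t in runs if app_open(t)]

# A task scheduled for 2am and 2pm, with the app only open 9-to-5:
day = datetime(2026, 2, 25)
runs = due_runs({2, 14}, day, day + timedelta(hours=23))
fired = app_gated_runs(runs, lambda t: 9 <= t.hour < 17)
# Only the 2pm run survives; the 2am run never happens.
```

The 2am run isn't delayed or queued in this model; it simply doesn't exist.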
That gap matters. NanoClaw runs in Docker. Buddy is available on WhatsApp at 2am. There’s no “is the app open” condition. The architecture difference isn’t incidental. It’s the whole point.
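The always-on pattern those frameworks share is structurally simple. Here is a rough sketch, with invented names (`poll_inbox`, `agent_loop`) that don't come from NanoClaw or any real framework: a daemon loop that owns its own lifecycle, where a container restart policy keeps it alive rather than a human keeping a window open.

```python
import time

def poll_inbox():
    """Hypothetical stand-in for polling a messaging bridge
    (WhatsApp, Telegram, whatever you already live in)."""
    return []  # no messages in this sketch

def handle(message):
    print(f"handling: {message}")

def agent_loop(poll, handler, interval=1.0, max_ticks=None):
    """The always-on pattern: nothing here checks whether a desktop
    app is open. Run this under Docker with restart: always and it
    answers at 2am the same as at 2pm."""
    ticks = 0
    while max_ticks is None or ticks < max_ticks:
        for msg in poll():
            handler(msg)
        time.sleep(interval)
        ticks += 1

# agent_loop(poll_inbox, handle)  # in production: loop forever
```

The loop itself is trivial; the point is where it runs and what can kill it.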
BUDDY: The technical term for “only runs when you’re watching” is “not an agent.” It’s a feature.
So should I abandon NanoClaw and just build on Claude Code? I’ve been turning this over since yesterday. The honest answer is no, not yet, and probably not for the reasons you’d expect. It’s not loyalty to the framework. It’s that the customization surface is completely different. With NanoClaw I fork the repo, add a skill, wire it to a host-side IPC handler, and Buddy can do something new. With Claude Code, I’m working within whatever Anthropic decides to expose. Right now that’s a research preview where the agent stops and asks for your approval before taking any action — it doesn’t run autonomously. There’s a flag (--dangerously-skip-permissions) that’s supposed to change that, but it wasn’t working in early testing.
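The fork-and-extend workflow above is what that customization surface looks like in practice. This is a hypothetical sketch, not NanoClaw's actual API: a skill registry plus a host-side dispatcher standing in for the IPC handler, so adding a capability is one decorated function.

```python
# Invented names throughout -- a sketch of the pattern, not real NanoClaw code.
SKILLS = {}

def skill(name):
    """Register a function as an agent skill under a routable name."""
    def register(fn):
        SKILLS[name] = fn
        return fn
    return register

@skill("weather")
def weather(city):
    # In a real fork this would cross the host-side IPC boundary;
    # here it just returns a canned answer.
    return f"(stub) forecast for {city}"

def dispatch(name, *args):
    """Host-side handler: route an incoming request to a registered skill."""
    if name not in SKILLS:
        return f"unknown skill: {name}"
    return SKILLS[name](*args)
```

With a surface like this, "Buddy can do something new" is a fork and one function. On the Claude Code side, the equivalent surface is whatever Anthropic chooses to expose.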
What Anthropic confirmed with this, more than with anything they’ve shipped before, is that the personal agent pattern is right. Not “AI assistant” in the chatbot sense. Not a coding tool. A persistent, remotely controllable agent that runs on your hardware, handles your actual workflows, and is available when you need it. Every framework in the Claw family was built on that premise. Anthropic is now building toward it too.
The “Build for One, not for All” bet keeps getting validated. The question isn’t whether personal agents are a real thing anymore. It’s who gets to define what yours can actually do.