Two Hours with Clawdbot
I installed Clawdbot on a Tuesday night in early February 2026. Deleted it two hours later.
Not because it didn’t work. It worked fine. Decent documentation, WhatsApp integration that actually connected on the first try. I sent a test message, got a response back, felt good about it for about 90 minutes.
Then I thought about what I actually wanted this thing to do. Generate images using Google’s Nano Banana Pro. Transcribe voice messages with OpenAI’s Whisper. Run web searches through Perplexity. The usual multi-model agentic setup.
That meant API keys. Three of them minimum, probably five once I got serious.
I opened the config file. Plain JSON sitting in ~/.clawdbot/config.json. Keys in cleartext:
{
  "google_api_key": "AIzaSy...",
  "openai_api_key": "sk-proj-...",
  "perplexity_api_key": "pplx-..."
}
I stared at this for longer than I should have. I know exactly what happens when you run untrusted user input with direct access to billing credentials. Someone gets owned. Not might. Will.
I had my Google API key ready to paste, and I had this very clear mental image of waking up to a $2,147 charge from some prompt injection I didn’t catch. Me on the phone with Chase trying to explain how an AI agent I installed from GitHub ran up my bill overnight. That felt bad enough that I closed the terminal, ran rm -rf ~/.clawdbot, and uninstalled the whole thing.
I didn’t need better security practices. I needed an architecture where the bad thing couldn’t happen in the first place.
The pattern I wanted: API keys live on the host machine, not in any file the agent can read. Not in environment variables the container inherits. Not anywhere the agent’s process can touch. When the agent needs to call an external API, it sends a request to the host. Host validates it, makes the call using credentials the agent never sees, returns the result. If the agent gets compromised (through prompt injection, supply chain attack, whatever), the blast radius is the sandbox. Not my bank account.
I’ve been working with Docker since 2018. This isn’t a novel pattern. It’s just not how most of these agent frameworks are built. They assume you trust the code you’re running completely, or they punt on the problem with vague advice about “securing your environment.”
Next morning I went looking. Could I build this myself? Probably, but I wanted something that already existed. Found NanoClaw on February 8th, 2026. Open source, Docker-based. I forked it the same day.
First thing I built: host-side IPC for external operations. The agent runs in a container with no API credentials at all. When it wants to generate an image, it calls a skill (NanoClaw’s abstraction for capabilities), which writes a JSON task file to /workspace/ipc/tasks/.
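The post doesn't show NanoClaw's actual skill API, so here's a minimal sketch of what the agent-side writer might look like. The task schema (`id`, `type`, `params`) and the temp-file-then-rename dance are my assumptions, not NanoClaw's real format; the point is that the agent only ever writes a file and walks away.

```python
import json
import os
import time
import uuid

TASKS_DIR = "/workspace/ipc/tasks"  # path from the post; mounted into the container

def submit_task(task_type: str, params: dict, tasks_dir: str = TASKS_DIR) -> str:
    """Write a task file for the host supervisor to pick up; return the task id."""
    task_id = uuid.uuid4().hex
    task = {
        "id": task_id,
        "type": task_type,       # e.g. "generate_image"
        "params": params,        # e.g. {"prompt": "..."}
        "created": time.time(),
    }
    os.makedirs(tasks_dir, exist_ok=True)
    # Write under a temp name first, then rename, so the host-side watcher
    # never picks up a half-written JSON file.
    tmp_path = os.path.join(tasks_dir, f".{task_id}.tmp")
    final_path = os.path.join(tasks_dir, f"{task_id}.json")
    with open(tmp_path, "w") as f:
        json.dump(task, f)
    os.rename(tmp_path, final_path)  # atomic on POSIX within one filesystem
    return task_id
```

Note what's absent: no API key, no network call, no credential lookup. The container's worst case is writing junk task files.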
A supervisor process on the host watches that directory. It picks up the task, validates it against an allowlist, makes the actual API call using credentials stored in the host’s environment, and writes the result back. Agent gets the generated image path. Never touches the key.
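The supervisor loop itself is simple enough to sketch. This is my illustration of the pattern, not the fork's actual code: a hypothetical allowlist of task types, a single polling pass over the directory, and a stubbed `handle_task` standing in for the real API calls. The keys would live only in the host process's environment, inside `handle_task`.

```python
import json
import os

# Hypothetical allowlist: the only task types the host will execute.
# Anything else, including whatever a compromised agent invents, is rejected.
ALLOWED_TASKS = {"generate_image", "transcribe_audio", "web_search"}

def handle_task(task: dict) -> dict:
    """Make the real API call using host-side credentials (stubbed here).

    In the real supervisor this is where e.g. os.environ["GOOGLE_API_KEY"]
    gets read, on the host, in a process the agent cannot inspect.
    """
    return {"status": "ok", "type": task["type"]}

def poll_once(tasks_dir: str, results_dir: str) -> int:
    """One pass over the task directory; returns how many tasks were handled."""
    os.makedirs(results_dir, exist_ok=True)
    processed = 0
    for name in sorted(os.listdir(tasks_dir)):
        if not name.endswith(".json"):
            continue  # skip temp files and anything else that isn't a task
        path = os.path.join(tasks_dir, name)
        with open(path) as f:
            task = json.load(f)
        if task.get("type") in ALLOWED_TASKS:
            result = handle_task(task)
        else:
            result = {"status": "rejected", "reason": "task type not allowlisted"}
        # Write the result where the agent can read it, then consume the task.
        with open(os.path.join(results_dir, name), "w") as f:
            json.dump(result, f)
        os.remove(path)
        processed += 1
    return processed
```

A production version would use inotify or a similar watcher instead of polling, and would validate `params` per task type, but the trust boundary is the same: validation and credentials on one side of the directory, the agent on the other.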
BUDDY: My container doesn’t have keys for Google, OpenAI, Perplexity, GitHub, or X. I just write a task file and wait for the host to handle it. Politely.
Trust isn’t a foundation for agent security. Isolation is.