The problem with building a capable AI assistant is not the AI. It’s the interface.
WhatsApp works brilliantly for a quick capture mid-commute. It handles async, mobile-first interaction well. But it renders markdown as plain text. It has no panels. You can’t have a voice conversation with it while a KB browser sits open beside the transcript. And it has absolutely no awareness of what directory you’re in when you’re three hours into a debugging session.
Different situations need different surfaces. But the intelligence — the memory, the context, the knowledge base — should not be split across those surfaces. You don’t want three mediocre brains. You want one excellent brain with three mouths.
That’s the architecture I’m building: NanoClaw, Kleos, and Nyx.
NanoClaw: the platform
NanoClaw is the backend. It’s an open-source Node.js orchestrator that receives messages from channels (WhatsApp, Telegram, Slack, Discord, Gmail) and routes them to Claude agents running in isolated Docker containers. Each agent group has its own filesystem and memory. The main agent — Lyra — has an Obsidian knowledge base mounted read-write into every container it runs.
Channel message arrives
└── Orchestrator picks it up
└── Spawns container (isolated filesystem + mounts)
└── Claude Agent SDK reads CLAUDE.md, accesses KB
└── Response routed back out
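The flow above can be sketched as a routing loop. This is a minimal illustration, not NanoClaw's actual API — the `AgentGroup` interface, `Orchestrator` class, and stubbed `run` are hypothetical stand-ins for the real container spawn:

```typescript
// Hypothetical sketch of a NanoClaw-style routing loop. In the real system,
// run() would spawn an isolated Docker container with the group's filesystem
// and the Obsidian vault mounted; here it's an in-memory stub.

type ChannelMessage = { channel: string; groupId: string; text: string };

interface AgentGroup {
  id: string;
  run(prompt: string): Promise<string>;
}

class Orchestrator {
  private groups = new Map<string, AgentGroup>();

  register(group: AgentGroup): void {
    this.groups.set(group.id, group);
  }

  // Channel message arrives -> pick the agent group -> run -> route back.
  async handle(msg: ChannelMessage): Promise<string> {
    const group = this.groups.get(msg.groupId);
    if (!group) throw new Error(`no agent group for ${msg.groupId}`);
    const reply = await group.run(msg.text);
    return reply; // in the real system: sent back out on msg.channel
  }
}

// Stub standing in for a containerised Claude session.
const lyra: AgentGroup = {
  id: "lyra",
  run: async (prompt) => `[lyra] acknowledged: ${prompt}`,
};

const orchestrator = new Orchestrator();
orchestrator.register(lyra);

orchestrator
  .handle({ channel: "whatsapp", groupId: "lyra", text: "log this" })
  .then((reply) => console.log(reply));
```

The point of the shape: the channel adapter knows nothing about agents, and the agent knows nothing about channels. Everything meets in the middle.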
The knowledge base is a PARA-structured Obsidian vault: projects, areas, resources, journal, showcase. Every conversation Lyra has goes through that context. Progress notes, decisions, incidents — they land in the vault. The next conversation starts with that history already in scope.
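Routing a captured note into the right PARA folder is the kind of decision Lyra makes on every call. A sketch of the path logic, assuming folder names match the vault layout above (the function and naming scheme are illustrative, not the vault's actual convention):

```typescript
// Illustrative PARA-style path routing for captured notes.
type NoteKind = "project" | "area" | "resource" | "journal" | "showcase";

function vaultPath(kind: NoteKind, slug: string, date = new Date()): string {
  const day = date.toISOString().slice(0, 10); // YYYY-MM-DD
  const roots: Record<NoteKind, string> = {
    project: "projects",
    area: "areas",
    resource: "resources",
    journal: "journal",
    showcase: "showcase",
  };
  // Journal entries are dated files; everything else nests under its slug.
  return kind === "journal"
    ? `${roots.journal}/${day}.md`
    : `${roots[kind]}/${slug}/${day}-${slug}.md`;
}

console.log(vaultPath("project", "nanoclaw", new Date("2025-01-15")));
// → projects/nanoclaw/2025-01-15-nanoclaw.md
```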
NanoClaw is the brain and the memory. The channels are just delivery pipes.
Kleos: the voice pivot
Kleos started as something different. A voice-first PWA for capturing invisible senior engineer work — the incidents you solved, the architecture decisions you made, the mentoring conversations nobody logs. React 19, Vite 7, the Web Speech API, local Ollama handling categorisation. Epic 1 shipped. The voice capture worked.
Then I looked at what it was actually doing and the redundancy was obvious.
Lyra already categorises, tags, and routes notes into the correct KB location — with full project context and knowledge base access behind every call. The local Ollama model doing isolated categorisation was a weaker version of something that already existed. I was running two AI layers where one was strictly better than the other.
The pivot decision was clean: strip the Ollama backend, wire Kleos into NanoClaw as the official web client. Keep everything that was good — the voice infrastructure, the PWA shell, the shadcn/ui design system, the offline capability. Replace what was wrong.
The result is what Kleos should have been from the beginning: a voice-first dashboard that lets you have a natural conversation with Lyra while a KB browser panel sits beside the transcript. Streaming Claude responses with full markdown rendering. Project status, active tasks, recent journal entries — all surfaced from the same vault that every other interaction writes to.
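On the streaming side, the client's job reduces to turning server events into transcript text. A sketch assuming an SSE-style stream with a `delta` field — the endpoint shape and event format are assumptions, not NanoClaw's real contract:

```typescript
// Hypothetical SSE parsing for a streaming transcript. The `delta` event
// shape is an assumption for illustration.
function parseSseLine(line: string): string | null {
  if (!line.startsWith("data: ")) return null;
  const payload = line.slice("data: ".length);
  if (payload === "[DONE]") return null; // end-of-stream sentinel (assumed)
  try {
    const event = JSON.parse(payload) as { delta?: string };
    return event.delta ?? null;
  } catch {
    return null; // ignore malformed lines rather than break the transcript
  }
}

// Accumulate deltas into the text the markdown renderer sees.
function accumulate(lines: string[]): string {
  return lines
    .map(parseSseLine)
    .filter((d): d is string => d !== null)
    .join("");
}

const chunks = [
  'data: {"delta":"Project status: "}',
  'data: {"delta":"two tasks open."}',
  "data: [DONE]",
];
console.log(accumulate(chunks)); // → Project status: two tasks open.
```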
Don’t compete with what the platform already does. Redirect the good infrastructure.
Nyx: the anti-pattern decision
Nyx is a GPU-accelerated Rust terminal. The developer’s preferred interface — fast rendering, built for people who spend their working life in a terminal window.
The obvious, tempting design: give Lyra a terminal executor. Let the AI run commands on your behalf. Autonomous execution, right there in your shell.
I ruled that out explicitly and documented why. Claude Code already exists. It does agentic terminal execution extremely well, built by a team with resources I don’t have, iterated on faster than I can maintain a reinvented version. NanoClaw already provides remote Claude Code sessions as a built-in capability. Building the same thing again in Nyx is maintenance burden without differentiated value.
What Nyx actually does is different and more interesting. When you cd into a project directory, Nyx detects it and loads the relevant KB context automatically. You can run lyra log "resolved the DPAPI session issue" and it writes a progress note to the correct project in the vault, with the timestamp and context already filled in. When a known error pattern appears in command output, the relevant runbook surfaces in a sidebar panel. Active tasks for the current project live in the status bar.
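What makes `lyra log` useful is that the terminal fills in context the user never types. A sketch of the note it might assemble before handing off to the backend — the frontmatter keys and `TerminalContext` shape are illustrative assumptions, not Nyx's actual format:

```typescript
// Illustrative assembly of a progress note from terminal context.
interface TerminalContext {
  cwd: string;      // where the user is
  project: string;  // detected from the directory, as described above
  branch: string;   // current git branch
}

function progressNote(
  ctx: TerminalContext,
  message: string,
  when: Date = new Date(),
): string {
  // Hypothetical frontmatter: the point is that project, branch, and
  // timestamp are captured automatically, not typed by the user.
  return [
    "---",
    `project: ${ctx.project}`,
    `branch: ${ctx.branch}`,
    `logged: ${when.toISOString()}`,
    "---",
    "",
    message,
  ].join("\n");
}

const note = progressNote(
  { cwd: "/home/dev/nanoclaw", project: "nanoclaw", branch: "fix/dpapi" },
  "resolved the DPAPI session issue",
  new Date("2025-01-15T09:30:00Z"),
);
console.log(note);
```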
Nyx is not an executor. It’s a developer-context-aware surface for the same brain. The terminal knows where you are. That context — current directory, active project, git branch — makes Lyra’s responses sharper in ways a WhatsApp message never can.
The Lyra integration is also explicitly one optional module, not Nyx’s core identity. Nyx needs to work brilliantly as a local AI-powered terminal without any network dependency. When Lyra is available, it gets better. That’s the right relationship between a client and a platform.
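The optional-module relationship can be made concrete with one rule: a failed probe never degrades the core terminal. A sketch, where the probe function and mode names are illustrative:

```typescript
// Hypothetical mode selection for an optional backend module: local-first,
// connected only when the platform answers.
type Mode = "local" | "lyra-connected";

async function selectMode(probe: () => Promise<boolean>): Promise<Mode> {
  try {
    return (await probe()) ? "lyra-connected" : "local";
  } catch {
    return "local"; // network failure never breaks the terminal
  }
}

selectMode(async () => false).then((mode) => console.log(mode)); // → local
```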
The shared layer
The knowledge base is what ties this together.
One Obsidian vault, mounted into every NanoClaw container. Kleos writes a voice note → it lands in the vault. A WhatsApp message captures a decision → it lands in the vault. Nyx logs a debugging session → same vault. The next session on any surface picks up where any other surface left off.
The brain doesn’t care which mouth is talking. The knowledge accumulates regardless of interface.
This is also the open source story. NanoClaw is the agent platform — the reference implementation for anyone who wants to run their own containerised Claude assistant with a persistent knowledge layer. Kleos is the reference web client, demonstrating how to build a voice dashboard on top of NanoClaw’s API. Nyx is the reference terminal client, demonstrating how to wire developer context into the same backend. Three separate repositories. One coherent system.
The principle
The interface is not the intelligence.
A single great chat interface with mediocre AI does less than three average interfaces sharing one excellent brain. The tendency in this space is to over-invest in the surface — to build a beautiful product experience around a model that isn’t actually that capable in the specific context you need. The better investment is the other direction: make the brain excellent, make the memory deep, then build whatever surfaces you actually need for the situations you actually find yourself in.
WhatsApp for async mobile. Kleos for voice conversation with rich rendering. Nyx for dev sessions where terminal context is the most relevant thing about where you are.
Same brain. Same knowledge. Different mouth for different moments.
The views expressed here are my own. Examples and scenarios are composites drawn from broad industry experience and do not represent any specific organization, product, or system.