Someone commented on one of my posts: “This has LLM smell all over it.”
They were right.
The sentences are clean. The structure is deliberate. There are none of the rough edges a human writer usually leaves — the kind that come from not quite knowing what you want to say until you’ve said it three times.
AI wrote those sentences. I’m not going to pretend otherwise.
But the comment assumed something I want to push back on. It assumed that “AI wrote the sentences” means “AI generated the knowledge.” That the ideas, the patterns, the decisions, the hard-won lessons — all of it came from a model that read everything on the internet and learned to sound like a software architect.
That’s not what happened.
What actually happened is this:
At 11:42pm on a Tuesday, I sent a WhatsApp message to myself: “Integration platform — don’t build it for the current clients, build it for the ones you’ll have in 18 months. Everything else is rework.”
That message became a journal entry. The journal entry fed into a decision note. The decision note shaped a blog post. The blog post became a signal on my site.
That kind of message, sent at odd hours, from wherever I happen to be. Decisions made under pressure. Patterns noticed after the third incident. A thought in the shower that turned out to be the clearest thing I’d said all week.
The AI didn’t generate any of that. It routed it, structured it, and eventually — yes — helped write the sentences that expressed it.
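The routing above — message to journal entry to decision note to draft to site signal — can be sketched as a tiny pipeline. This is an illustrative sketch, not the actual system; the names and stages are hypothetical:

```python
from dataclasses import dataclass

# Hypothetical stages, in order. Each step structures the note; none invents content.
STAGES = ["capture", "journal", "decision", "draft", "signal"]

@dataclass
class Note:
    text: str
    source: str  # e.g. "whatsapp", "shower-thought"
    stage: str   # one of STAGES

def promote(note: Note) -> Note:
    """Move a note one stage down the pipeline, if a next stage exists."""
    i = STAGES.index(note.stage)
    if i + 1 < len(STAGES):
        note.stage = STAGES[i + 1]
    return note

msg = Note("Build for the clients you’ll have in 18 months.", "whatsapp", "capture")
for _ in range(4):
    msg = promote(msg)
print(msg.stage)  # signal
```

The point the sketch makes is the one in the text: the model sits at the end of the pipe, expressing material that entered at the top, already written by a human.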
That’s a different thing.
The commenter was asking the right question. “Who wrote this?” is exactly the question that matters. But it has two layers:
Who wrote the sentences? Claude did. I gave it my thinking and it gave me prose.
Who wrote the knowledge? I did. Sixteen years of systems, incidents, migrations, decisions, failures, and patterns. None of that came from a model. It came from doing the work.
The distinction matters because there are two very different ways to use AI for content:
Mode 1: Give the AI a topic and let it synthesise from the internet. The sentences will be clean. The knowledge will be generic. This has LLM smell because it IS LLM — the model is the source, not just the writer.
Mode 2: Build a system that captures your organic thinking continuously, structures it, and feeds it to the AI as the source material. The sentences will still be clean. The knowledge will be yours.
I built Mode 2. This series is about how.
Over the next six posts I’m going to show the system from the inside:
- The capture layer — how a WhatsApp message becomes a KB note at midnight
- The structure — why PARA, why Obsidian, what I tried first that didn’t work
- The AI pipe — what Lyra actually does (and what she doesn’t)
- The output layer — how KB content becomes site signals
- The numbers — three months of this running live
- The Breed — what this era actually unlocks for people who spent years accumulating expertise before anyone was watching
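For readers who haven’t met PARA: it’s Tiago Forte’s four-bucket scheme (Projects, Areas, Resources, Archive), ordered by actionability. A knowledge base skeleton using it could be bootstrapped like this — the paths are illustrative, not necessarily how my vault is laid out:

```python
from pathlib import Path

# PARA buckets, most actionable first. Folder names are illustrative.
for bucket in ["1-Projects", "2-Areas", "3-Resources", "4-Archive"]:
    Path("kb", bucket).mkdir(parents=True, exist_ok=True)
```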
If you’ve ever wondered whether AI writing can be authentic — not just whether the AI is “honest,” but whether the knowledge underneath is real — this is for you.
The LLM smell is real. The knowledge underneath it is mine.
This is the first signal in the constellation The Second Brain That Publishes Itself.