The best architecture advice I ever ignored:

“Always use microservices for scalability.” “Never mix legacy and modern in the same system.” “Rewrite it properly or don’t touch it at all.”

I ignored all of it. Systems kept running. Features shipped. Teams stayed productive. And somewhere in those years of decisions — inside large, mature codebases with teams that couldn’t stop shipping long enough to rewrite anything — a framework crystallised.

Not from theory. From repetition.

The problem with architecture advice

Most of it assumes you’re starting fresh. Clean slate, unlimited runway, a team of senior engineers who all agree on the direction.

That is not reality.

Reality is a legacy system that even your own engineers don’t fully understand. Business pressure to ship features while you modernise. A mix of experience levels where the same mistake recurs in different forms. Environments where zero-downtime isn’t aspirational — it’s a hard requirement.

Standard advice fails here because it optimises for correctness, not for what’s possible given your constraints. The question is never “what’s the best architecture?” The question is always “what can we actually do, with this team, this system, this timeline, and this risk tolerance?”

That reframe is the whole thing. Everything else follows from it.

Five principles for constrained environments

1. Unify context, not just code

Multi-repo systems create a cost that rarely appears in any metric: duplicated thinking. Teams brainstorm the same problem twice, document discovery twice, mentally reload context every time they cross a repository boundary. In large codebases spread across multiple repositories sharing a common data layer, every significant feature touches at least two repos — which means two CI pipelines, diverging conventions, and tooling that can’t see the full picture.

The obvious answer is a full monorepo migration. Collapse everything, import all histories, unify the tooling. Textbook solution. Also the one that disrupts every team’s daily workflow overnight and carries substantial migration risk in high-availability environments.

A better approach: build the unified layer additively. An Nx monorepo using Git submodules as a bridge, for example, can wire large multi-project codebases together with centralised tooling and documentation while letting individual teams continue working within their existing repo if they choose. The monorepo becomes the source of truth without being a forcing function. Teams opt in as confidence grows.
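Mechanically, the bridge can be small. A hypothetical sketch of the submodule side (repo names, paths, and URLs are placeholders, not from any real system) — each existing repo is mounted into the workspace, and Nx project configuration then targets those paths for shared lint, build, and docs tooling:

```ini
# .gitmodules at the Nx workspace root (illustrative paths and URLs).
# Each legacy repo keeps its own history and remote; the workspace
# just mounts it, so teams can keep working in the original repo.
[submodule "apps/billing"]
	path = apps/billing
	url = git@example.com:org/billing.git
[submodule "apps/portal"]
	path = apps/portal
	url = git@example.com:org/portal.git
```

Because the submodules pin exact commits, the unified workspace is additive: deleting the workspace repo leaves every team's repo untouched.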

The insight generalises: when consolidation carries disruption risk, find the additive path. Create the unified view first. The migration follows when the value is proven.

2. Optimise for outcomes, not metrics

Performance problems in legacy systems are typically everywhere. N+1 query patterns, inefficient ORM translations, shared database objects degrading query plans, audit mechanisms that compound under load. The obvious move: profile everything, fix everything, make it all faster.

Don’t do that.

Instead, identify the specific business outcome you’re trying to reach. Multi-tenant deployment. Sub-second response on a critical user path. Headroom for a projected load increase. Then — and only then — identify which bottlenecks are directly in the path of that outcome. Not everything slow. The subset of issues blocking the specific goal.

Fix those. Explicitly defer everything else.

Teams that apply this principle find that a small fraction of the issues — often fewer than ten — account for the constraint that actually matters. Ugly code that doesn’t block the goal gets left alone. Not because it’s acceptable, but because touching it consumes time and introduces risk without moving toward the outcome.

“Make it faster” is not a goal. It’s a direction without a destination. Tie every optimisation to a specific business outcome or you optimise forever and ship nothing.
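For a concrete picture of the N+1 shape mentioned above, here is a minimal sketch. Everything in it — the types, the in-memory “queries”, the counter — is a hypothetical stand-in for an ORM, there only to show why the batched form collapses N round trips into one:

```typescript
// Hypothetical data model standing in for ORM entities.
type Customer = { id: number; name: string };
type Order = { id: number; customerId: number };

const customers: Customer[] = [
  { id: 1, name: "Acme" },
  { id: 2, name: "Globex" },
];
const orders: Order[] = [
  { id: 10, customerId: 1 },
  { id: 11, customerId: 1 },
  { id: 12, customerId: 2 },
];

let queryCount = 0; // instrument "database" access to make the N+1 visible

// One lookup per order: the N+1 pattern (1 order query + N customer queries).
function namesNaive(): string[] {
  return orders.map((o) => {
    queryCount++; // each lookup is a separate round trip
    return customers.find((c) => c.id === o.customerId)!.name;
  });
}

// One batched lookup for all customers the orders reference.
function namesBatched(): string[] {
  queryCount++; // single round trip
  const byId = new Map(customers.map((c) => [c.id, c]));
  return orders.map((o) => byId.get(o.customerId)!.name);
}
```

Whether this fix is worth making, though, depends entirely on whether that query path sits in front of the outcome you named.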

3. Strangle safely

Here is the terrifying truth about legacy systems: even the people who built them don’t know the full feature catalog. Decades of development, team turnover, undocumented business rules embedded in stored procedures and migration scripts — it’s a system that works for reasons nobody fully understands anymore.

A big-bang rewrite fails in this environment. You don’t know what you’re rewriting. You can’t test what you’ve never specified.

The Strangler Fig pattern is the right model, but with one constraint that most descriptions leave out: in high-availability environments, you cannot fail mid-flight. Every migration step must be reversible. Every new service must prove itself in production before the old path is deprecated. The legacy path stays live until the foundation underneath it is genuinely solid.

Before a line of code moves, the responsible approach is a documentation pass: feature catalog, API contracts, data migration rules, rollback procedures. That work isn’t ceremony — it’s the risk management work. Teams that skip it discover missing business rules mid-migration, when the cost of discovery is highest. Understanding what you have before deciding what you’re building is what separates modernisation from gambling.
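The reversibility constraint can be sketched in a few lines. This is a hedged illustration, not any real system: the handlers and flag store are invented, and the point is only that rollback is a runtime config change rather than a deploy, and that the legacy path stays live as the fallback throughout:

```typescript
// Hypothetical request handlers for the old and new implementations.
type Handler = (req: string) => string;

const legacyHandler: Handler = (req) => `legacy:${req}`;
const modernHandler: Handler = (req) => `modern:${req}`;

// Routing state lives outside the code path so it can be flipped at runtime.
const flags = { modernRolloutPercent: 10 };

function route(req: string, bucket: number): string {
  // bucket: a stable per-request value in [0, 100), e.g. a hash of a tenant id,
  // so the same tenant always takes the same path during the rollout.
  if (bucket < flags.modernRolloutPercent) {
    try {
      return modernHandler(req);
    } catch {
      // The new path must prove itself; any failure falls back to legacy.
      return legacyHandler(req);
    }
  }
  return legacyHandler(req);
}
```

Setting `modernRolloutPercent` back to zero is the whole rollback procedure — which is exactly what “every migration step must be reversible” demands.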

Legacy modernisation is not a technical project. It’s a risk management operation. The question isn’t “how do we rewrite this?” It’s “how do we guarantee we don’t break what we don’t fully understand?”

4. Equalise with AI

Not all engineers produce consistent output. This is something architects don’t say out loud, but it shapes every decision about process, review gates, and team structure. Senior developers catch things juniors miss. Institutional knowledge walks out the door when people leave. The same anti-patterns recur because human discipline erodes under pressure.

You can try to fix this with process: more code reviews, more documentation, more training. All of that is valuable and all of it is insufficient. Process requires discipline to follow, and discipline is exactly what degrades when the team is under delivery pressure.

A more durable approach: embed consistency in the tools rather than the team. AI commands covering the common patterns and anti-patterns specific to your codebase. Agent definitions loaded with project-specific context — the naming conventions, the data access boundaries, the service layer contracts, the soft-delete and auditing rules. Integration with the planning and ticket system so that the relevant specifications load automatically into each AI session.
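What “embedded in the tools” can look like, as a hedged sketch — the field names and rules below are invented for illustration. The idea is that conventions live in versioned, reviewable config that gets rendered into every AI session, rather than in reviewers’ heads:

```typescript
// Hypothetical agent definition: project conventions as data.
type AgentRule = { id: string; rule: string };

interface AgentDefinition {
  name: string;
  context: string[]; // docs loaded into every AI session
  rules: AgentRule[]; // codebase-specific patterns to enforce
}

const reviewAgent: AgentDefinition = {
  name: "backend-review",
  context: ["docs/service-contracts.md", "docs/data-access.md"],
  rules: [
    { id: "soft-delete", rule: "Never hard-delete audited entities; set deletedAt." },
    { id: "repo-boundary", rule: "Services call repositories, never the ORM directly." },
  ],
};

// A session prompt is just the definition rendered to text, so updating a
// convention means editing config, not retraining people.
function renderPrompt(agent: AgentDefinition): string {
  return [
    `Agent: ${agent.name}`,
    ...agent.context.map((c) => `Load: ${c}`),
    ...agent.rules.map((r) => `Rule[${r.id}]: ${r.rule}`),
  ].join("\n");
}
```

Because the definition is a file in the repo, a convention change is a pull request — reviewed once, then enforced everywhere.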

In projects where I’ve applied this — including my own: NanoClaw, Kleos, Nyx — the pattern holds. A developer with well-configured AI tooling produces work that clears a higher bar, not because they became more senior overnight, but because the AI surfaces what they’d miss. The floor rises. The variance shrinks.

AI doesn’t replace engineers. It raises the floor. Stop trying to make humans consistent. Make your tooling consistent instead.

5. Prove through practice

This piece is principle five.

I spent years doing this work without being able to articulate it. Strong technical contributions, real impact — and no durable surface to show for it. Not because the work wasn’t there. Because I hadn’t built the infrastructure to make it visible.

seevali.dev exists as a proof-of-work system. Not a portfolio page with a list of technologies. A Signal Feed: a stream of thoughts, decisions, build logs, and numbers that shows how I think, not just what I’ve shipped. The git submodules post is a war story. The shower post is pattern recognition in real time. This post is the framework that ties it all together.

The evidence I can point to publicly: NanoClaw — a multi-channel AI assistant with a knowledge base wired in as live context, built and running. Kleos and Nyx — projects scaffolded and tracked through a vault-based development lifecycle I built to manage my own work. These are not claims. They are systems.

Don’t tell people you’re a good architect. Show them what you’ve architected. The credibility is in the specificity.

The meta-principle

Architecture is constraint management, not technology selection.

Every principle above starts with constraints: the team that can’t absorb overnight disruption, the business goal that defines what “better performance” actually means, the legacy system that will break in unknown ways if you move too fast, the human variance that erodes in any process-heavy solution.

The constraint shapes the solution. On a greenfield project with an unlimited runway, I’d make different decisions — and I’d still start by mapping the constraints, because there are always constraints. Timeline. Team. Budget. Existing integrations. Customer commitments. Regulatory environment.

The architect’s job is not to find the theoretically correct answer. It’s to navigate the constraint space toward an outcome that’s better than where you started, using approaches the team can actually adopt and with risk the organisation can actually absorb.

Textbook answers assume the constraints away. Real architecture works within them.

That’s the whole framework. I’m writing it down because the next time someone is staring at a system they didn’t build and being told “the only real answer is to rewrite it all” — I want there to be a different answer in the room.


The views expressed here are my own. Examples and scenarios are composites drawn from broad industry experience and do not represent any specific organization, product, or system.