Nobody builds a twenty-year-old monolith on purpose. You build a reasonable application, ship it, and then two decades of features, team rotations, and regulatory changes happen. What you end up with is a system that works — reliably, in production, under load — but one that nobody fully understands anymore.
If you maintain one of these systems, you recognize the profile. Hundreds of projects in a single solution. Multiple interconnected applications sharing a single database. Business logic buried in stored procedures that nobody wrote tests for. Audit trails implemented through database triggers with non-trivial performance overhead. A custom schema migration tool that rebuilds everything on every deployment regardless of what changed.
The system works. That’s both the achievement and the trap. “Works” is not the same as “built for what comes next.”
This post is not a migration plan. It’s the architectural north star — the end state worth holding in mind when you’re making tactical decisions inside the constraints of a legacy enterprise codebase. Not what you build tomorrow. What you’re building toward.
The domain boundaries already exist
A monolith like this knows everything about everything. Every functional area — compliance, operations, scheduling, inventory, reporting — woven through the same codebase, the same database, the same deployment pipeline. Change one domain and you risk breaking all of them.
But if you listen to how the business talks about the system, the boundaries are already there. Different departments own different capabilities. Different regulations govern different concerns. These aren’t arbitrary technical divisions — they’re how the business itself is organized.
The vision starts with Domain-Driven Design — not as a philosophy exercise, but as a practical decomposition strategy. Each bounded context becomes an independently deployable microservice with its own database, its own deployment lifecycle, and eventually its own team. An operator creating a task in one domain shouldn’t need to worry about — or risk breaking — the scheduling system in another.
The decomposition isn’t just technical. It’s organizational. When a domain owns its own service, the team responsible for that domain can deploy, scale, and iterate independently. The blast radius of any change shrinks from “the entire platform” to “one bounded context.”
The facade that makes migration safe
You cannot rewrite a twenty-year-old system in one go. The feature catalog alone is unknowable without months of discovery work. Undocumented business rules live in stored procedures that were never covered by tests, because nobody expected them to still be running two decades later.
The Strangler Fig pattern is the answer, and an API Gateway is what makes it surgical.
The gateway sits between clients and the system. On day one, it routes everything to the monolith. As microservices come online — starting with the cleanest domain boundaries — the gateway reroutes specific requests to the new service. The monolith never notices. The clients never notice. The risk is contained to one domain at a time.
The gateway handles more than routing. Authentication, rate limiting, request transformation, response aggregation, monitoring — all the cross-cutting concerns that individual services shouldn’t carry. It’s the seam between old and new, and it’s designed to be temporary. As the monolith shrinks, the gateway’s routing table shifts. Eventually, there’s nothing left to strangle.
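The routing seam can be sketched in a few lines. This is a minimal illustration, not a real gateway: the path prefixes and backend URLs are hypothetical, and a production gateway (Ocelot, YARP, Kong, etc.) would handle this declaratively.

```python
# Strangler-fig routing sketch: a longest-prefix match decides whether a
# request goes to the monolith or to a migrated microservice.
# All paths and backend names below are hypothetical.

MONOLITH = "http://legacy-monolith"

# Day one: everything falls through to the monolith.
# Add an entry here each time a domain migrates.
routes = {
    "/api/scheduling": "http://scheduling-service",  # already migrated
}

def resolve_backend(path: str) -> str:
    """Return the backend for a request path; longest matching prefix wins."""
    match = ""
    for prefix in routes:
        if path.startswith(prefix) and len(prefix) > len(match):
            match = prefix
    return routes.get(match, MONOLITH)
```

The useful property is reversibility: rolling a domain back to the monolith is the removal of one routing entry, not a redeployment of anything.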
The critical discipline: every migration step must be reversible. Every new service proves itself in production before the old code path is deprecated. You don’t decommission the monolith piece by piece — you earn the right to, one domain at a time.
Events over shared state
In a monolith, components communicate through shared database tables. Module A writes a row, Module B reads it. They’re coupled through data rather than explicit contracts. Nobody can map the full dependency graph because it lives in SQL queries scattered across thousands of files.
The target architecture inverts this. Apache Kafka becomes the backbone — an event bus where microservices publish what happened and subscribe to what they care about.
Consider what this looks like in practice. An operator creates a new work order in a regulated enterprise system:
- The Operations service writes the work order to its own database and, in the same transaction, writes a “WorkOrderCreated” event to an outbox table
- A background process — or Debezium, doing change data capture against the outbox — picks up the event and publishes it to Kafka
- The Compliance service consumes the event and runs impact checks. Does this work order trigger any regulatory holds?
- If a compliance hold is required, it emits a “ComplianceHoldTriggered” event
- The Scheduling service picks that up and adjusts timelines for affected resources
- Notifications fire downstream to the relevant teams
Each service reacts to events and emits its own. No service calls another directly. No shared database tables. If the Scheduling service is temporarily unavailable, its events simply wait in Kafka until it recovers; if a step fails outright, compensating events unwind the work — revert the compliance hold, roll back to a known state, retry when the service recovers.
The Outbox pattern is what makes this reliable. The database write and the event publication happen in a single local transaction. You never end up in a state where the work order was created but the event was lost, or the event was published but the work order wasn’t saved. Debezium watches the outbox table via change data capture and streams unprocessed events to Kafka, ensuring eventual consistency without polling.
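The heart of the Outbox pattern is that both writes share one local transaction. A minimal sketch using SQLite standing in for the service's database (table names, columns, and the event shape are hypothetical):

```python
import json
import sqlite3

# Outbox pattern sketch: the business row and the event record commit in
# one local transaction, so neither can exist without the other.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE work_orders (id INTEGER PRIMARY KEY, descr TEXT)")
conn.execute(
    "CREATE TABLE outbox (id INTEGER PRIMARY KEY, topic TEXT,"
    " payload TEXT, published INTEGER DEFAULT 0)"
)

def create_work_order(descr: str) -> int:
    with conn:  # one transaction: both inserts commit, or neither does
        cur = conn.execute("INSERT INTO work_orders (descr) VALUES (?)", (descr,))
        order_id = cur.lastrowid
        conn.execute(
            "INSERT INTO outbox (topic, payload) VALUES (?, ?)",
            ("work-orders", json.dumps({"type": "WorkOrderCreated", "id": order_id})),
        )
    return order_id
```

A relay process — or Debezium reading the database's change log — later picks up unpublished outbox rows, produces them to Kafka, and marks them published.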
These are choreography-based sagas — long-running transactions that span multiple services without a central orchestrator and without requiring the services to know about each other's internals. Loose coupling that actually holds up under failure.
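The choreography itself reduces to a simple rule: subscribe to what you care about, emit what happened. A toy sketch with an in-memory bus standing in for Kafka (the event names follow the work-order flow above; the handlers and payload fields are hypothetical):

```python
from collections import defaultdict

# Choreography sketch: services register handlers for event types and
# emit new events in response. No service calls another directly.
subscribers = defaultdict(list)
event_log = []  # record of every event, for illustration only

def publish(event_type, payload):
    event_log.append(event_type)
    for handler in subscribers[event_type]:
        handler(payload)

def on(event_type):
    def register(handler):
        subscribers[event_type].append(handler)
        return handler
    return register

@on("WorkOrderCreated")
def compliance_check(payload):
    if payload.get("regulated"):  # hypothetical impact check
        publish("ComplianceHoldTriggered", payload)

@on("ComplianceHoldTriggered")
def adjust_schedule(payload):
    publish("ScheduleAdjusted", payload)

publish("WorkOrderCreated", {"id": 1, "regulated": True})
```

Note that nothing in `compliance_check` knows Scheduling exists; removing or adding a downstream consumer changes no upstream code.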
One database, one truth — per service
In a legacy monolith, the single biggest coupling surface is often the shared database. Every application, every feature, every domain — one database instance with hundreds of scalar functions degrading query plans, cross-domain joins nobody dares refactor, and a schema carrying the archaeology of decades of business decisions.
The vision is database-per-service. Operations owns its schema. Compliance owns its own. No shared tables. No cross-service joins. A schema migration in one domain cannot bring down another.
This introduces a genuine challenge: complex reads that currently rely on cross-domain joins. The answer is CQRS — Command Query Responsibility Segregation. Writes go to each service’s own database through its own models. Reads are served from materialized views that aggregate data across domains, kept current by consuming events from Kafka. A Redis caching layer handles hot-path queries. At extreme scale, read replicas absorb read-heavy workloads and sharding provides horizontal growth.
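The read side of CQRS is just a projection: replay events, maintain a denormalized view. A minimal sketch (event shapes and field names are hypothetical; in production the view would live in its own store, not a dict):

```python
# CQRS read-model sketch: a materialized view kept current by applying
# events, so cross-domain queries need no cross-service joins.
read_view = {}  # work_order_id -> denormalized row for queries

def apply(event):
    kind, data = event["type"], event["data"]
    if kind == "WorkOrderCreated":
        read_view[data["id"]] = {"descr": data["descr"], "on_hold": False}
    elif kind == "ComplianceHoldTriggered":
        read_view[data["id"]]["on_hold"] = True

# Replaying the stream from Kafka rebuilds the view from scratch.
for e in [
    {"type": "WorkOrderCreated", "data": {"id": 7, "descr": "inspect pump"}},
    {"type": "ComplianceHoldTriggered", "data": {"id": 7}},
]:
    apply(e)
```

The view is disposable by design: if its schema needs to change, drop it and replay the event stream.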
Beyond isolation, database-per-service is the path out of vendor lock-in. Legacy .NET systems are often deeply coupled to SQL Server — not just the engine, but stored procedures carrying business logic, database-scheduled agent jobs, trigger-based audit trails, and proprietary migration tooling. Entity Framework Core is the bridge. Map the schema incrementally. Move business logic from stored procedures into the application layer where it can be unit tested, versioned, and debugged with modern tooling. Replace brute-force migration tools with EF Core’s incremental migrations — targeted changes, versioned, no unnecessary re-runs.
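What "move business logic out of stored procedures" buys you is testability. Once a rule is plain application code, it can be unit tested, versioned, and stepped through in a debugger. A sketch with an entirely hypothetical rule (not drawn from any real system), shown in Python for brevity though the same move applies in C#:

```python
# A rule formerly buried in a stored procedure, now a pure, testable
# function. The rule itself is a made-up example.
def requires_compliance_hold(work_order: dict) -> bool:
    """Hold is required for regulated work above a risk threshold."""
    return work_order["regulated"] and work_order["risk_score"] >= 3

# Unit tests become trivial — something the stored procedure never had.
assert requires_compliance_hold({"regulated": True, "risk_score": 5})
assert not requires_compliance_hold({"regulated": True, "risk_score": 1})
```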
Trigger-based audit trails deserve special attention. SQL triggers are implemented differently across database vendors, making them a hard vendor lock. Three alternatives are worth exploring:
- Change Data Capture — database-level change logging, vendor-provided but more portable than custom triggers. The most pragmatic first step.
- Event Sourcing — store domain events as the source of truth, rebuild state at any point in time. Architecturally pure, but adds significant application-level complexity.
- Structured audit logging — application-level audit events shipped to a purpose-built store via frameworks like Serilog with an ELK backend.
Each has trade-offs. But all three are more portable than triggers wired to a specific database vendor.
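The structured-logging option is the easiest to picture. A sketch of an application-level audit event (field names are hypothetical; in a .NET system this would flow through Serilog or similar to the ELK backend rather than being built by hand):

```python
import json
from datetime import datetime, timezone

# Structured audit logging sketch: the application emits an explicit,
# portable audit record instead of relying on database triggers.
def audit_event(actor: str, action: str, entity: str, entity_id: int) -> str:
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "action": action,
        "entity": entity,
        "entity_id": entity_id,
    })

record = audit_event("operator-42", "WorkOrderCreated", "WorkOrder", 7)
```

Because the record is plain JSON emitted by the application, it survives a database migration untouched — which is exactly the property the triggers lack.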
Seeing everything
In many legacy enterprise systems, the monitoring story is fragmented. Some applications write to log files. Others rely on Windows Event Viewer. Background process failures can go unnoticed for days. When something breaks, the first question is often “where do I even look?”
Unified observability changes that equation. The vision is an ELK stack — Elasticsearch, Logstash, Kibana — with Beats agents collecting from every surface:
- Filebeat on every server generating log files
- Winlogbeat on Windows servers monitoring Event Viewer
- Logstash parsing and enriching logs with environment and application metadata
- Elasticsearch indexing everything into a searchable, time-series store
- Kibana turning raw data into dashboards and real-time visibility
- ElastAlert triggering notifications on error rate spikes, service degradation, and silent failures
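The enrichment step is worth a concrete picture. In the spirit of a Logstash filter, raw log lines gain environment and application metadata before indexing — a sketch with a hypothetical log format and field names:

```python
# Enrichment sketch: a raw log line becomes a structured document
# carrying the metadata Kibana dashboards will filter on.
def enrich(raw_line: str, app: str, env: str) -> dict:
    level, _, message = raw_line.partition(" ")
    return {
        "level": level,
        "message": message,
        "application": app,
        "environment": env,
    }

doc = enrich("ERROR payment gateway timeout", "billing", "production")
```

In the real pipeline this logic lives in Logstash configuration, not application code; the point is simply that every indexed document answers "which app, which environment" without anyone grepping a server.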
The goal isn’t debugging. It’s confidence. When you can see every service, every background job, every failure mode — you deploy with conviction instead of anxiety. You catch silent failures in minutes instead of days. You see resource exhaustion building before it takes systems offline.
Observability is the foundation that makes everything else in this vision possible. You can’t safely strangle a monolith if you can’t see what’s happening on both sides of the migration.
The north star, not the roadmap
A vision like this is not a plan. There’s no timeline, no budget, no approval chain required for it to be valuable. It’s the picture that gives tactical decisions a direction.
If you’re an architect inside a legacy enterprise system, you face constraint-driven decisions every day. The team can’t absorb overnight disruption. The business needs features now, not infrastructure. Regulatory environments punish failure. In that context, having a north star — even one you can’t reach this quarter — changes how you evaluate every fork in the road. Without it, every tactical decision is a coin flip. With it, every tactical decision is a step — maybe small, maybe sideways — toward something coherent.
Some elements of a vision like this are achievable incrementally. Moving background jobs from database-scheduled agents to application-level schedulers. Adopting an ORM for new development paths while leaving legacy data access untouched. Centralizing documentation and tooling across previously siloed repositories. These are steps that don’t require anyone to approve the full decomposition.
The full target — true microservices, event-driven communication, database-per-service — is a horizon goal. And that’s fine. A system that has survived twenty years didn’t get there by rushing. The next twenty years don’t need to rush either. What they need is a direction — clear enough that when an opportunity opens, when a new feature could be built as a service instead of another module bolted onto the monolith, you know which way to walk.
The system has survived by being good enough. The north star is about what comes after good enough.
The views expressed here are my own. Examples and scenarios are composites drawn from broad industry experience and do not represent any specific organization, product, or system.