A quick word from me
This issue isn't sponsored - I write these deep dives in my free time and keep them free for everyone. If your company sells AI tools, dev tools, courses, or services that .NET developers would actually use, sponsoring an issue is the most direct way to reach them.
Want to reach thousands of .NET developers? Sponsor TheCodeMan →

Every few years a new architecture style becomes "the right way to build software."
In 2014 it was microservices. In 2019 it was service mesh. In 2022 it was serverless. In 2025 it became fashionable to admit, in public, that maybe the modular monolith was the right answer all along.
Meanwhile, in the real world, the same story keeps repeating in .NET shops: the monolith starts to hurt, someone pitches microservices in a Monday meeting, and the team commits to a migration before counting the cost.
This article is the playbook I wish more teams had before that Monday meeting.
We will walk through the real evolution path I have seen work for .NET systems serving up to and beyond 100,000 users:
monolith → modular monolith → selective microservices
The thesis is simple and unfashionable: architecture evolution should follow measurable pain, not hype. I will repeat that line throughout the article, because in five years of architecture reviews, almost every failure I have seen came from violating it.
Let me tell you about a real-shaped story. I will call the company "Northwind Pay" - not a real client, but a composite of several .NET teams I have worked with.
5,000 users. Northwind Pay is a B2B billing platform built as a single ASP.NET Core 8 app. One Postgres database, one Redis, one background worker. Two engineers. Deployments take 4 minutes via GitHub Actions. Nobody complains.
30,000 users. The team is now eight engineers. Features ship weekly. The release pipeline has grown to 22 minutes because integration tests now hit twelve subsystems. Merge conflicts in Startup.cs and IServiceCollection registrations are a daily ritual. Two engineers quietly start refactoring shared services into "modules" without telling anyone.
60,000 users. A senior hire from a FAANG company joins. He says, with confidence, "this needs to be microservices." Nobody disagrees, because nobody wants to look junior. A six-month decomposition begins.
100,000 users. They now have eleven services, four databases, RabbitMQ, a half-finished Kubernetes setup, and a distributed tracing tool that nobody has fully configured. P99 latency has gotten worse. Two-thirds of incidents are now caused by network issues between services that did not exist a year ago. The CTO asks the architect, in a 1:1, "did we actually need this?"
The pivot. The team consolidates. They merge nine of the eleven services back into a single, well-structured modular monolith. Two services - the ones with genuinely independent scaling profiles (a PDF generator and a webhook delivery worker) - stay separate. Latency drops. Incidents drop. The team ships features again.
This is not an anti-microservices story. It is a story about sequencing. The team eventually arrived at microservices for the parts that needed them. They just took an expensive detour to get there.
Microservices are not, primarily, a code architecture. They are an organizational and operational architecture that happens to express itself in code.
When a microservices migration fails, it almost never fails because someone wrote a bad HttpClient call. It fails for organizational and operational reasons: team boundaries that do not match service boundaries, operational muscle (observability, per-service CI/CD, on-call) that does not exist yet, and, deepest of all, service boundaries drawn before the domain was understood.

That last failure is the expensive one. A wrong boundary inside a monolith costs you a refactor. A wrong boundary across services costs you a quarter.
Before we discuss solutions, let's get specific about what breaks, in what order, in a typical .NET monolith. This matters because the right architecture move depends on which pain you are actually feeling.
In my experience, the pain arrives in a predictable order: slow queries and missing indexes come first, then tangled internal design, and only much later anything that is genuinely an architecture problem.

Notice that the first two are not architecture problems. They are code and design problems. If you migrate to microservices to fix a slow query, you will be deeply disappointed - and significantly poorer.
Architecture evolution should follow measurable pain, not hype.
Let's define our terms.
A monolith in the .NET sense is a single deployable artifact - typically one ASP.NET Core process - that contains all the application's business logic, talking to one primary database.

This boring diagram has scaled Stack Overflow, Shopify (for years), Basecamp, GitHub (for most of its history), and a long list of companies you use every day.
Why monoliths still win for most teams:
Navigating between features is a cd .., not a git clone.

The honest truth is that most .NET applications will never need anything more than a well-structured monolith. If you are at 5k-50k users with a small team, your job is not to architect for the future. Your job is to keep the monolith healthy: fix slow queries, push slow work to background services, and keep the module boundaries clean.
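To make "push slow work to background services" concrete, here is a minimal in-process queue sketch using only the BCL's System.Threading.Channels. In a real ASP.NET Core app the consuming loop would live inside a BackgroundService; the type names here (BackgroundQueue, WorkItem) are illustrative, not from any library.

```csharp
using System;
using System.Threading;
using System.Threading.Channels;
using System.Threading.Tasks;

public sealed record WorkItem(Guid Id, string Payload);

public sealed class BackgroundQueue
{
    // Bounded so a traffic spike applies backpressure instead of eating memory.
    private readonly Channel<WorkItem> _channel =
        Channel.CreateBounded<WorkItem>(capacity: 1_000);

    // Called from the request path: enqueue and return immediately.
    public ValueTask EnqueueAsync(WorkItem item, CancellationToken ct = default)
        => _channel.Writer.WriteAsync(item, ct);

    public void Complete() => _channel.Writer.Complete();

    // Called from the background worker: drain items one at a time.
    public async Task ProcessAllAsync(Func<WorkItem, Task> handler, CancellationToken ct = default)
    {
        await foreach (var item in _channel.Reader.ReadAllAsync(ct))
            await handler(item);
    }
}
```

The request handler enqueues and returns in microseconds; the slow work (PDF rendering, emails, webhooks) happens off the hot path.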
When the monolith starts to hurt - and you have ruled out "this is just a slow query" - the next step is almost never microservices. It is a modular monolith.
A modular monolith is still one deployable. But internally, it is structured as a set of strictly bounded modules, each with:

- Its own folder and implementation project, with internal types.
- Its own dedicated database schema.
- A small public contracts project, which is the only thing other modules may reference.

Why this is the underrated step: you keep one deployment, one debugging experience, and one transaction model, while getting real boundaries. Above all, you get to test those boundaries cheaply, before any network is involved.
That last point is the entire reason this step exists. You cannot know in advance which boundaries are real and which are imaginary. You can only learn it by living with them.
Let's get concrete. Here is the folder structure I use as a starting point for any new .NET system that I expect to grow.
```
src/
  NorthwindPay.Api/                        // Composition root, hosts modules
    Program.cs
    appsettings.json
  Modules/
    Orders/
      NorthwindPay.Orders/                 // Internal implementation
        Domain/
        Application/
        Infrastructure/
        OrdersModule.cs                    // Registration extension
      NorthwindPay.Orders.Contracts/       // Public contracts + integration events
    Billing/
      NorthwindPay.Billing/
      NorthwindPay.Billing.Contracts/
    Identity/
      NorthwindPay.Identity/
      NorthwindPay.Identity.Contracts/
    Notifications/
      NorthwindPay.Notifications/
      NorthwindPay.Notifications.Contracts/
  BuildingBlocks/
    NorthwindPay.SharedKernel/             // Result types, Guard clauses, base types
    NorthwindPay.EventBus/                 // In-process bus + outbox abstractions
```
Two rules I enforce ruthlessly:
1. The host (NorthwindPay.Api) is the only place that references all modules. No module references another module's implementation project.
2. Modules talk to each other only through *.Contracts projects. That means interfaces, DTOs, and integration events: never EF Core entities, never internal services.

This is not folder cosplay. It is a compile-time enforcement of boundaries. If a developer in the Orders module tries to call a Billing service directly, it does not compile. That is the strongest architecture rule you can have, because it cannot be argued with in a code review.
Each module exposes one extension method, and Program.cs becomes a list of those:
```csharp
// NorthwindPay.Orders/OrdersModule.cs
namespace NorthwindPay.Orders;

public static class OrdersModule
{
    public static IServiceCollection AddOrdersModule(
        this IServiceCollection services,
        IConfiguration configuration)
    {
        services.AddDbContext<OrdersDbContext>(opt =>
            opt.UseNpgsql(
                configuration.GetConnectionString("Orders"),
                npg => npg.MigrationsHistoryTable("__EFMigrationsHistory", "orders")));

        services.AddScoped<IOrderService, OrderService>();
        services.AddScoped<IOrderRepository, OrderRepository>();

        // Register MediatR handlers from THIS assembly only
        services.AddMediatR(cfg =>
            cfg.RegisterServicesFromAssemblyContaining<OrdersModule>());

        return services;
    }

    public static IEndpointRouteBuilder MapOrdersEndpoints(
        this IEndpointRouteBuilder app)
    {
        var group = app.MapGroup("/api/orders").WithTags("Orders");

        group.MapPost("/", CreateOrderEndpoint.Handle);
        group.MapGet("/{id:guid}", GetOrderEndpoint.Handle);

        return app;
    }
}
```
And Program.cs:
```csharp
var builder = WebApplication.CreateBuilder(args);

builder.Services
    .AddIdentityModule(builder.Configuration)
    .AddOrdersModule(builder.Configuration)
    .AddBillingModule(builder.Configuration)
    .AddNotificationsModule(builder.Configuration);

builder.Services.AddEventBus(builder.Configuration);

var app = builder.Build();

app.MapIdentityEndpoints();
app.MapOrdersEndpoints();
app.MapBillingEndpoints();
app.MapNotificationsEndpoints();

app.Run();
```
This is the most important file in the system. If it grows beyond ~30 lines, something is wrong with your modules.
When Orders needs something from Billing, it does not call BillingService directly. It depends on a contract:
```csharp
// NorthwindPay.Billing.Contracts/IBillingApi.cs
namespace NorthwindPay.Billing.Contracts;

public interface IBillingApi
{
    Task<InvoiceSummary> CreateInvoiceAsync(
        CreateInvoiceCommand command, CancellationToken ct);

    Task<InvoiceSummary?> GetInvoiceAsync(
        Guid invoiceId, CancellationToken ct);
}

public sealed record CreateInvoiceCommand(
    Guid OrderId,
    Guid CustomerId,
    decimal Amount,
    string Currency);

public sealed record InvoiceSummary(
    Guid InvoiceId,
    Guid OrderId,
    string Status,
    DateTimeOffset IssuedAt);
```
The Billing module implements IBillingApi internally. Orders only sees the contract. The day Billing becomes a service, you replace the in-process implementation with an HTTP/gRPC client that implements the same interface. Zero changes in Orders.
This is the single most important pattern in this article. Internalize it.
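To make the swap concrete, here is a sketch of an HTTP-based implementation of the same contract. The routes and the class name HttpBillingApiClient are assumptions, not from the article; the contract types are repeated so the sketch compiles standalone.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading;
using System.Threading.Tasks;

// Contract types repeated from NorthwindPay.Billing.Contracts for self-containment.
public sealed record CreateInvoiceCommand(Guid OrderId, Guid CustomerId, decimal Amount, string Currency);
public sealed record InvoiceSummary(Guid InvoiceId, Guid OrderId, string Status, DateTimeOffset IssuedAt);

public interface IBillingApi
{
    Task<InvoiceSummary> CreateInvoiceAsync(CreateInvoiceCommand command, CancellationToken ct);
    Task<InvoiceSummary?> GetInvoiceAsync(Guid invoiceId, CancellationToken ct);
}

// Hypothetical HTTP implementation, registered in place of the in-process one
// after Billing is extracted. The /api/invoices routes are illustrative.
public sealed class HttpBillingApiClient : IBillingApi
{
    private readonly HttpClient _http;

    public HttpBillingApiClient(HttpClient http) => _http = http;

    public async Task<InvoiceSummary> CreateInvoiceAsync(CreateInvoiceCommand command, CancellationToken ct)
    {
        var response = await _http.PostAsJsonAsync("/api/invoices", command, ct);
        response.EnsureSuccessStatusCode();
        return (await response.Content.ReadFromJsonAsync<InvoiceSummary>(cancellationToken: ct))!;
    }

    public Task<InvoiceSummary?> GetInvoiceAsync(Guid invoiceId, CancellationToken ct)
        => _http.GetFromJsonAsync<InvoiceSummary>($"/api/invoices/{invoiceId}", ct);
}
```

In the composition root, the swap is one registration: services.AddHttpClient<IBillingApi, HttpBillingApiClient>(...) instead of the in-process binding. Nothing in Orders changes.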
Direct calls between modules are fine for queries (give me invoice X) but a trap for workflows (an order was placed; now do five things). For workflows, use events.
```csharp
// NorthwindPay.Orders.Contracts/Events/OrderPlaced.cs
namespace NorthwindPay.Orders.Contracts.Events;

public sealed record OrderPlaced(
    Guid OrderId,
    Guid CustomerId,
    decimal Amount,
    string Currency,
    DateTimeOffset OccurredAt) : IIntegrationEvent;
```
```csharp
// NorthwindPay.Notifications/Handlers/OrderPlacedHandler.cs
internal sealed class OrderPlacedHandler : IIntegrationEventHandler<OrderPlaced>
{
    private readonly IEmailSender _email;
    private readonly ICustomerLookup _customers;

    public OrderPlacedHandler(IEmailSender email, ICustomerLookup customers)
    {
        _email = email;
        _customers = customers;
    }

    public async Task HandleAsync(OrderPlaced @event, CancellationToken ct)
    {
        var customer = await _customers.GetAsync(@event.CustomerId, ct);

        await _email.SendAsync(
            to: customer.Email,
            subject: "We received your order",
            body: $"Order {@event.OrderId} for {@event.Amount} {@event.Currency} confirmed.",
            ct);
    }
}
```
The Orders module knows nothing about Notifications. It publishes OrderPlaced and moves on. If Notifications is slow, broken, or temporarily disabled, Orders does not care.
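The article assumes an in-process bus living in NorthwindPay.EventBus without showing it. A minimal BCL-only sketch of what such a bus could look like (InProcessEventBus and its members are illustrative names, not a library API):

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

public interface IIntegrationEvent { }

public interface IIntegrationEventHandler<in TEvent> where TEvent : IIntegrationEvent
{
    Task HandleAsync(TEvent @event, CancellationToken ct);
}

public sealed class InProcessEventBus
{
    // Event type -> list of type-erased handler invokers.
    // Subscriptions happen once at startup, so the inner lists are not guarded.
    private readonly ConcurrentDictionary<Type, List<Func<IIntegrationEvent, CancellationToken, Task>>> _handlers = new();

    public void Subscribe<TEvent>(IIntegrationEventHandler<TEvent> handler)
        where TEvent : IIntegrationEvent
    {
        var list = _handlers.GetOrAdd(typeof(TEvent), _ => new());
        list.Add((e, ct) => handler.HandleAsync((TEvent)e, ct));
    }

    public async Task PublishAsync<TEvent>(TEvent @event, CancellationToken ct = default)
        where TEvent : IIntegrationEvent
    {
        // No subscribers is fine: the publisher does not care who listens.
        if (!_handlers.TryGetValue(typeof(TEvent), out var list)) return;

        foreach (var invoke in list)
            await invoke(@event, ct);
    }
}
```

In production you would resolve handlers from DI per publish and isolate handler failures; this sketch only shows the decoupling shape.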

The naive version of "publish an event after saving" is wrong. Consider:
```csharp
await _db.SaveChangesAsync(ct);
await _bus.PublishAsync(new OrderPlaced(...)); // process crashes here
```
The order is saved. The event is lost. Forever. This is one of the most common silent data-corruption bugs in distributed .NET systems.
The fix is the outbox pattern: persist the event in the same database transaction as the state change, then have a background process publish it.
```csharp
// In Orders module
public sealed class CreateOrderHandler
    : IRequestHandler<CreateOrderCommand, Result<Guid>>
{
    private readonly OrdersDbContext _db;

    public async Task<Result<Guid>> Handle(
        CreateOrderCommand cmd, CancellationToken ct)
    {
        var order = Order.Place(cmd.CustomerId, cmd.Items);
        _db.Orders.Add(order);

        _db.OutboxMessages.Add(new OutboxMessage
        {
            Id = Guid.NewGuid(),
            OccurredOnUtc = DateTime.UtcNow,
            Type = nameof(OrderPlaced),
            Content = JsonSerializer.Serialize(new OrderPlaced(
                order.Id,
                order.CustomerId,
                order.Total,
                order.Currency,
                DateTimeOffset.UtcNow))
        });

        await _db.SaveChangesAsync(ct); // atomic: order + outbox row

        return order.Id;
    }
}
```
And the background publisher:
```csharp
public sealed class OutboxPublisher : BackgroundService
{
    private readonly IServiceScopeFactory _scopes;
    private readonly IEventBus _bus;
    private readonly ILogger<OutboxPublisher> _logger;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            try
            {
                using var scope = _scopes.CreateScope();
                var db = scope.ServiceProvider.GetRequiredService<OrdersDbContext>();

                var batch = await db.OutboxMessages
                    .Where(m => m.ProcessedOnUtc == null)
                    .OrderBy(m => m.OccurredOnUtc)
                    .Take(100)
                    .ToListAsync(stoppingToken);

                foreach (var message in batch)
                {
                    var @event = Deserialize(message);
                    await _bus.PublishAsync(@event, stoppingToken);
                    message.ProcessedOnUtc = DateTime.UtcNow;
                }

                await db.SaveChangesAsync(stoppingToken);
            }
            catch (Exception ex)
            {
                _logger.LogError(ex, "Outbox publishing failed");
            }

            await Task.Delay(TimeSpan.FromSeconds(1), stoppingToken);
        }
    }
}
```
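One hardening step worth spelling out (this EF Core mapping is an assumption, since the article does not show the outbox entity configuration): index the outbox table so the publisher's scan for unprocessed rows stays cheap as the table grows.

```csharp
// Hedged sketch - inside the Orders module's OnModelCreating.
modelBuilder.Entity<OutboxMessage>(b =>
{
    b.ToTable("outbox_messages", "orders");

    // Composite index so the publisher's
    // "WHERE ProcessedOnUtc IS NULL ORDER BY OccurredOnUtc" query
    // is an index scan, not a table scan over 50 million rows.
    b.HasIndex(m => new { m.ProcessedOnUtc, m.OccurredOnUtc });

    // Postgres-only alternative: a partial index over unprocessed rows.
    // b.HasIndex(m => m.OccurredOnUtc).HasFilter("processed_on_utc is null");
});
```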

A few production lessons I have learned the hard way:
- Index the outbox table on (ProcessedOnUtc, OccurredOnUtc), or you will be paged at 3 AM when the table grows to 50 million rows.
- Add an Attempts column and a dead-letter table. One malformed event should not block all others.

Long before you need microservices, you usually need to separate reads from writes. The classic .NET pattern:
```csharp
// Write side
public sealed record PlaceOrderCommand(
    Guid CustomerId, IReadOnlyList<OrderLine> Lines) : IRequest<Result<Guid>>;

// Read side
public sealed class OrderListQuery : IRequest<IReadOnlyList<OrderListItemDto>>
{
    public Guid CustomerId { get; init; }
}

internal sealed class OrderListQueryHandler
    : IRequestHandler<OrderListQuery, IReadOnlyList<OrderListItemDto>>
{
    private readonly NpgsqlDataSource _ds;

    public async Task<IReadOnlyList<OrderListItemDto>> Handle(
        OrderListQuery q, CancellationToken ct)
    {
        await using var conn = await _ds.OpenConnectionAsync(ct);

        var rows = await conn.QueryAsync<OrderListItemDto>(
            "select id, total, status, placed_at from orders.order_list_view where customer_id = @cid",
            new { cid = q.CustomerId });

        return rows.ToList();
    }
}
```
Reads use Dapper or raw SQL against a view or a projection table. Writes use EF Core. This single change typically wipes out 60-80% of your "the API is slow" tickets, with zero microservices anywhere.
Now we get to the part everyone wants to skip to. Extraction is the easy part if you have done the modular monolith well. It is a nightmare if you have not.
A module is a candidate for extraction when at least two of the following are true:

- It has a genuinely independent scaling profile (think of the PDF generator from earlier).
- It needs an independent release cadence, usually because a separate team owns it.
- It has different reliability requirements from the rest of the system.
- Its data ownership is already fully isolated behind its own schema and contracts.

If only one of these is true, you are extracting too early.

The extraction itself, when the module is well-bounded, comes down to one decisive step: swapping the in-process IBillingApi implementation for an HTTP client that implements the same IBillingApi.

If that swap is hard, you did not actually have a modular monolith. You had a monolith with folders.
The day you extract your first service, your operational surface area roughly doubles: per-service CI/CD, distributed tracing, on-call rotations, contract versioning, and an async messaging backbone all show up on the bill.

The honest framing: you do not get to do microservices part-time. You either invest in a platform team or you suffer.
The single biggest predictor of microservices success is whether each service truly owns its data.

If two services share a database, they are one service wearing a costume. Any schema change requires coordination. Any performance issue is shared. Any outage is shared. You have all the cost of microservices and none of the benefits.
In a modular monolith, you simulate data ownership with schemas (orders.*, billing.*, identity.*) and a strict rule: a module never queries another module's tables. Cross-module reads go through *.Contracts interfaces, which translate to either an in-process call today or an HTTP/gRPC call tomorrow.
If you cannot enforce that rule today inside one database, you will not magically enforce it across services tomorrow.
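One way to make the schema-per-module rule visible in code is to pin each module's DbContext to its schema. This is a sketch (the article does not show its DbContext setup); the Order entity and constructor shape are assumptions.

```csharp
// Sketch: every table this context maps lands in the orders.* schema,
// so both queries and migrations advertise which module owns the data.
public sealed class OrdersDbContext : DbContext
{
    public OrdersDbContext(DbContextOptions<OrdersDbContext> options) : base(options) { }

    public DbSet<Order> Orders => Set<Order>();

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.HasDefaultSchema("orders"); // orders.*, never billing.*
    }
}
```

Combined with a separate migrations history table per schema (shown earlier in AddOrdersModule), each module migrates its own tables independently.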
Before you extract a single service, your monolith should already have:

- Distributed tracing and metrics wired through OpenTelemetry.
- Structured logging: Serilog or Microsoft.Extensions.Logging writing JSON.

Why before? Because the day you split the monolith, you lose the ability to step through a single stack trace. If you do not already have observability muscle, you will be debugging production by reading logs in three different systems and crying.
```csharp
// Program.cs - minimum viable observability
builder.Services.AddOpenTelemetry()
    .WithTracing(t => t
        .AddAspNetCoreInstrumentation()
        .AddHttpClientInstrumentation()
        .AddEntityFrameworkCoreInstrumentation()
        .AddSource("NorthwindPay.*")
        .AddOtlpExporter())
    .WithMetrics(m => m
        .AddAspNetCoreInstrumentation()
        .AddRuntimeInstrumentation()
        .AddMeter("NorthwindPay.*")
        .AddOtlpExporter());
```
If your monolith does not have this, your next ticket should not be "extract Billing service." It should be "wire up OpenTelemetry."
Here is the roadmap I would give Northwind Pay - or any team in a similar position - in priority order. None of these are "do microservices."
Phase 0 - Stabilize the monolith (weeks 1-4)
- Fix the slowest queries and push slow work into BackgroundService workers.

Phase 1 - Internal modularization (weeks 4-12)
- Split the code into modules and move every cross-module call behind *.Contracts projects.

Phase 2 - Async backbone (weeks 8-16)
- Introduce integration events with an in-process bus and the outbox pattern.

Phase 3 - Read/write separation (weeks 12-20)
- Move hot read paths to Dapper queries over projection tables.

Phase 4 - Selective extraction (months 6+)

- Extract only the modules that earn it, one at a time, with a rollback plan.
The willingness to roll back is what separates engineering teams from cargo-cult teams.
After enough architecture reviews, the same mistakes show up. Here are the worst.
The Distributed Monolith. Multiple services, one shared database, synchronous HTTP calls in every request path. All the cost of microservices, none of the benefits. This is the most common failure mode.
The Auth Service Trap. Extracting authentication first because "every service needs it" - and then every request in the system has an extra network hop on the critical path. Auth should be a library plus an identity provider (Keycloak, Auth0, Entra), not a synchronous service in your hot path.
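A sketch of the "library plus identity provider" approach: each service validates JWTs locally against cached signing keys, so there is no auth network hop per request. This assumes the Microsoft.AspNetCore.Authentication.JwtBearer package; the authority URL and audience are placeholders.

```csharp
// Signing keys are fetched once from the provider's discovery document and
// cached; per-request token validation is local CPU work, not a network call.
builder.Services
    .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        options.Authority = "https://identity.example.com"; // Keycloak/Auth0/Entra
        options.Audience = "northwindpay-api";
    });

var app = builder.Build();
app.UseAuthentication();
app.UseAuthorization();
```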
The Microservice Per Entity. "Order service", "Customer service", "Product service" - each owning one table. You did not extract domains; you extracted tables. Every business workflow now spans three services.
The Event-Driven Soup. No documented contracts, no schema registry, no idempotency. Events fire, things happen, nobody knows in what order, and debugging requires reading three months of logs.
The "We'll Add Tracing Later" Service. Extracting services without observability. You will not add tracing later. You will add it during the postmortem.
The Premature Kafka. A team of six engineers running Kafka, ZooKeeper (or KRaft), Schema Registry, and Kafka Connect to move a thousand messages a day. RabbitMQ or Azure Service Bus would have been a 10-line config.
The God Module. A "Core" or "Common" module that 80% of other modules depend on. Congratulations, you reinvented the monolith inside the monolith.
Architecture evolution should follow measurable pain, not hype.
A useful mental model is to compare deployment shape before and after.

In a modular monolith, deployments are still simple: one artifact, one rollout. Module boundaries help the codebase, not the deployment. That is fine - that is exactly what most teams need.
In microservices, every deploy is a coordination problem. You need contract tests, expand/contract schema migrations, and the discipline to never break a downstream consumer. This is real work. Budget for it.
What exactly is a modular monolith in .NET?

A modular monolith in .NET is a single ASP.NET Core deployable structured internally as independent modules, each with its own folder, internal types, dedicated database schema, and a small public contracts project. Modules communicate only through contracts and integration events, never through direct dependencies on each other's implementation code.
When should a team move from a monolith to microservices?

Move from monolith to microservices only when you have measurable, recurring pain that cannot be solved within a single deployable: independent scaling needs, independent release cadences across teams, different reliability requirements, or fully isolated data ownership for a specific module. If you cannot point to such pain, stay with a modular monolith.
Is the modular monolith just a stepping stone to microservices?

For most .NET systems, it is the destination. Only modules that prove they need independent scaling, deployment, or ownership should be extracted. Many successful products run as modular monoliths indefinitely.
What is the most common mistake in a microservices migration?

Extracting services before domain boundaries are stable and before operational capabilities (observability, CI/CD per service, on-call rotations, async messaging) exist. The result is a distributed monolith - the worst of both worlds.
Do you need Kubernetes to run microservices?

No, but you need some orchestration. App Service, Azure Container Apps, AWS ECS, or Fly.io can run microservices without Kubernetes. Choose the simplest platform that gives you health checks, rolling deploys, and autoscaling.
How do modules communicate inside a modular monolith?

Synchronous queries go through interfaces in *.Contracts projects, implemented in-process today and replaceable with HTTP/gRPC clients later. Workflows go through integration events published via an in-process bus, persisted with the outbox pattern.
Where does MediatR fit in?

MediatR (or any in-process mediator) is useful inside a module, for command/query separation. It is a poor fit between modules - that is what contract interfaces and integration events are for.
How do you enforce module boundaries?

Use project references plus architecture tests. Tools like NetArchTest.Rules or ArchUnitNET let you assert in unit tests that "module X does not reference module Y's implementation namespace." If a developer breaks the rule, the build fails.
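As a sketch of such an architecture test (this assumes the NetArchTest.Rules NuGet package and xUnit; the namespaces are the article's illustrative NorthwindPay ones):

```csharp
using NetArchTest.Rules;
using Xunit;

public class ModuleBoundaryTests
{
    [Fact]
    public void Orders_does_not_reference_Billing_implementation()
    {
        // Scan every type in the Orders assembly and fail the build if any of
        // them depends on the Billing implementation namespace.
        var result = Types.InAssembly(typeof(NorthwindPay.Orders.OrdersModule).Assembly)
            .That().ResideInNamespace("NorthwindPay.Orders")
            .ShouldNot().HaveDependencyOn("NorthwindPay.Billing")
            .GetResult();

        Assert.True(result.IsSuccessful);
    }
}
```

Note that depending on NorthwindPay.Billing.Contracts stays legal; only the implementation namespace is forbidden.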
Can a single monolith really handle 100,000 users?

Easily. A well-tuned ASP.NET Core monolith on modern hardware can handle hundreds of thousands of users. The bottleneck is almost always the database, not the application tier.
When do you need the outbox pattern?

The moment you publish an event after persisting state - whether to an in-process bus, RabbitMQ, or Kafka. Without an outbox, you will eventually lose events when a process crashes between save and publish. It is one of the cheapest insurance policies in distributed systems.
The single most expensive architecture mistake I see is teams skipping the modular monolith.
They go from a tangled monolith straight to microservices, learn that they did not actually understand their domain, and spend a year paying for that lesson in pager duty. The teams that do well take the boring path: monolith first, then a modular monolith, then selective extraction of the few modules that prove they need it.
That sequencing is not glamorous. It will not get you on a conference stage. But it will get you to 100k users with a team that still ships features and sleeps through the night.
Architecture evolution should follow measurable pain, not hype.
If there is one sentence I want you to take from this article, that is the one.
If you want to keep going deeper on the patterns that make this evolution work in real .NET systems - outbox, CQRS, modular boundaries, integration events, and the trade-offs behind each - I cover them with production-grade examples in Design Patterns that Deliver. Use code DEEP20 for 20% off.
And if you found this useful, the easiest way to get the next deep dive is to join the newsletter - one practical .NET architecture article per week, no fluff, no reposted Twitter threads.
Until next time - keep the boundaries clean and the deployments boring.
Stop arguing about code style. In this course you get a production-proven setup with analyzers, CI quality gates, and architecture tests — the exact system I use in real projects. Join here.
Not sure yet? Grab the free Starter Kit — a drop-in setup with the essentials from Module 01.
Design Patterns that Deliver — Solve real problems with 5 battle-tested patterns (Builder, Decorator, Strategy, Adapter, Mediator) using practical, real-world examples. Trusted by 650+ developers.
Just getting started? Design Patterns Simplified covers 10 essential patterns in a beginner-friendly, 30-page guide for just $9.95.
Every Monday morning, I share 1 actionable tip on C#, .NET & Architecture that you can use right away. Join here.
Join 20,000+ subscribers who improve their .NET skills with actionable tips on C#, Software Architecture & Best Practices.