Microservices vs Monoliths: What Executives Need to Know About Software Architecture

In 2023, Tobi Lütke - founder and CEO of Shopify - sent an internal memo telling his engineering team to stop building microservices and consolidate back into what he called a "modular monolith." This was not a small company throwing in the towel on complexity. Shopify processes over $235 billion in annual commerce and employs thousands of engineers. Their conclusion: for most of their work, the simpler architecture was the right one.

Around the same time, the engineering world was reading post-mortems from companies that had spent years migrating to microservices, only to discover that the costs exceeded the benefits. Meanwhile, Netflix and Uber were running thousands of microservices as the foundation of products used by hundreds of millions of people daily.

When your CTO says "we need to break up the monolith," they are making a significant proposal that will consume engineering capacity for years and change how your company builds software. Here is what that actually means, and how to evaluate whether it makes sense for your business.


What a Monolith Actually Is

A monolithic application is software built as a single deployable unit. All the code - the user interface, the business logic, the database access layer, every feature - lives together in one codebase, runs on the same process, and is deployed as a single package.

When Basecamp (the project management tool) ships a software update, they update one application. When there is a bug, there is one place to look. When they want to test a change, they run one test suite.

This sounds simple because it is simple. A monolith is the natural way software starts. Nearly every successful technology company - Google, Facebook, Twitter, Amazon, Shopify - started as a monolith. The architecture is not unsophisticated. It is the appropriate default for most of software's lifecycle.

The problems emerge at scale. When a monolith grows to millions of lines of code and hundreds of engineers making simultaneous changes, several things happen:

Deployment becomes dangerous. Every change requires deploying the entire application. A bug in a small, unrelated feature can take down the whole system. Teams become afraid to deploy.

Scaling becomes wasteful. If the checkout flow needs 10x more computing power during a sale, a monolith forces you to scale the entire application - including the blog, the admin panel, the email system - just to get more capacity where you actually need it.

Teams get in each other's way. When 50 teams are all working in the same codebase, coordination overhead grows quadratically. Code changes conflict. Release schedules collide.


What Microservices Actually Are

Microservices decompose that single application into many small, independently deployable services. Each service owns a specific business capability - payments, inventory, user authentication, notifications - and communicates with other services through APIs.

Amazon famously made this transition in the early 2000s, mandated by the same Jeff Bezos API memo that reshaped the whole company. Amazon's product catalog service knows about products. The shopping cart service knows about carts. The checkout service knows about orders. They talk to each other through defined interfaces. No service reaches directly into another service's database.

The benefits of this architecture at scale are real:

Independent deployment. The payments team can deploy a change to the payments service without waiting for the recommendations team to finish their sprint. Teams move at their own pace.

Targeted scaling. If the search service is under load, scale the search service. Leave everything else unchanged. This is significantly more cost-efficient at large scale.

Fault isolation. If the recommendation engine crashes, customers cannot see recommendations. They can still check out. A failure in one service does not cascade across the application.

Team autonomy. Each team owns their service end-to-end - the code, the database, the deployment pipeline. Coordination is through API contracts, not shared codebases.


The Costs That Rarely Make It Into the Pitch Deck

The business case for microservices usually looks compelling on a whiteboard. The slide deck shows independent deployments, isolated failures, and clean team ownership. What the slide deck often omits is the operational complexity that comes with running dozens or hundreds of independent services.

Distributed systems are hard. When your application is one process, function calls between components are instant and reliable. When your application is 50 services talking over a network, every call can fail, time out, or return partial data. You now have to handle network errors in every service, which adds code complexity and new failure modes that did not exist in the monolith.
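To make this concrete, here is a minimal Python sketch of the retry handling that every cross-service call ends up needing. The "inventory service" is a simulated stand-in, not a real API; in a monolith, the same operation would be a plain function call with none of this machinery.

```python
import time

def call_with_retries(fn, retries=3, backoff_s=0.0):
    """Call a remote-service function, retrying on transient failure.

    In a monolith this would be a plain in-process call; in a
    microservices system every call needs handling like this.
    """
    last_err = None
    for attempt in range(1, retries + 1):
        try:
            return fn()
        except ConnectionError as err:  # stand-in for timeouts, 5xx responses, etc.
            last_err = err
            time.sleep(backoff_s * attempt)  # back off between attempts
    raise last_err

# Simulated flaky "inventory service": fails twice, then succeeds.
attempts = {"n": 0}
def fetch_inventory():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("inventory-service timed out")
    return {"sku": "ABC-123", "in_stock": 7}

result = call_with_retries(fetch_inventory, retries=3)
print(result["in_stock"], attempts["n"])  # -> 7 3
```

Every one of those retries, backoffs, and error branches is code that simply does not exist in a single-process application.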

Observability costs multiply. In a monolith, a transaction either succeeds or fails. In a microservices architecture, a failed customer checkout might have touched 12 services. Figuring out which one failed - and why - requires distributed tracing tooling, centralized logging, and engineers who understand how to use them. The monitoring infrastructure alone can cost hundreds of thousands of dollars annually.

Infrastructure overhead scales with service count. Each microservice needs its own deployment pipeline, its own monitoring, its own on-call rotation, and its own scaling configuration. A 50-service architecture means fifty pipelines, fifty dashboards, and fifty scaling configurations to keep healthy; platform tooling can amortize some of that work, but the overhead still grows with the service count. Netflix famously runs thousands of microservices - and employs an entire team just to manage the chaos engineering that validates their resilience.

The "death star" anti-pattern. Poorly executed microservices create more coupling problems than the monolith they replaced. If service A calls service B, which calls service C, which calls service D before any user request completes, you have distributed the monolith without gaining any of the independence benefits. This pattern - visible in architecture diagrams as a web of arrows connecting every service to every other - is common and expensive to untangle.
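The cost of those long call chains can be estimated with back-of-envelope arithmetic: if each service is independently available 99.9% of the time, a request that must traverse several of them in series is only as available as their product. A short sketch (the 99.9% figure and chain depths are illustrative assumptions, not measurements from any real system):

```python
def chain_availability(per_service: float, depth: int) -> float:
    """Availability of a request that must traverse `depth` services
    in series, each independently available `per_service` of the time."""
    return per_service ** depth

# A single well-run service at "three nines":
print(round(chain_availability(0.999, 1), 4))   # -> 0.999
# An A -> B -> C -> D chain, as in the anti-pattern above:
print(round(chain_availability(0.999, 4), 4))   # -> 0.996
# A deep path touching 12 services:
print(round(chain_availability(0.999, 12), 4))  # -> 0.9881
```

Each hop quietly multiplies away reliability: a 12-service chain of individually excellent services delivers materially worse availability than any one of them.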


The Shopify Lesson: Modularity Without Distribution

Shopify's decision to move toward a modular monolith is instructive because it distinguishes between two different problems that microservices are often proposed to solve.

The first problem is modularity - making sure code for different business capabilities is cleanly separated, well-organized, and independently testable. This is a genuine architectural need that becomes critical as codebases grow.

The second problem is distribution - actually running different capabilities as separate services with network communication between them. This solves the scaling and deployment independence problems, but at substantial operational cost.

Shopify concluded that they needed modularity but not distribution. A modular monolith enforces clean boundaries between business capabilities in code without paying the distributed systems overhead. Code for payments cannot directly call code for inventory - it must go through a defined interface - but at runtime they are still one deployable process.
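A minimal sketch of what that boundary looks like in code (illustrative names only; Shopify's actual implementation is in Ruby and differs in detail). The checkout code can only reach inventory through a narrow public interface, yet everything still runs as one process:

```python
class InventoryInterface:
    """Public API of the inventory module. Its internal state and
    tables stay private; other modules may only call these methods."""

    def __init__(self):
        self._stock = {"ABC-123": 5}  # private state: no other module touches this

    def reserve(self, sku: str, qty: int) -> bool:
        """Reserve stock if available; the only sanctioned entry point."""
        if self._stock.get(sku, 0) >= qty:
            self._stock[sku] -= qty
            return True
        return False

def checkout(inventory: InventoryInterface, sku: str, qty: int) -> str:
    """Checkout code depends only on the interface. At runtime this is
    still a cheap, reliable in-process call, not a network request."""
    if inventory.reserve(sku, qty):
        return "order placed"
    return "out of stock"

inv = InventoryInterface()
print(checkout(inv, "ABC-123", 2))  # -> order placed
print(checkout(inv, "ABC-123", 9))  # -> out of stock
```

The discipline of the boundary is preserved - checkout cannot reach into inventory's private state - without any of the network failure modes described earlier.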

For most companies at most stages, this is the right trade-off. The modularity discipline is valuable. The distribution overhead is not.


A Framework for Evaluating Your Team's Proposal

When your engineering team proposes a microservices migration, here are the business questions to ask:

What specific problem are we solving? "The architecture is messy" is not a business problem - it is an engineering problem that may or may not warrant a multi-year migration project. "We cannot deploy the checkout flow without risking downtime for the entire platform" is a specific business problem. Demand specificity.

What is the team size and scale? Microservices at 10 engineers is operational overhead for limited benefit. At 500 engineers working on a system processing millions of transactions daily, the calculus changes. The architecture should match the scale and team structure, not the other way around.

What is the migration plan? Migrating a production monolith to microservices without disrupting operations is a multi-year project of significant complexity. The team should have a phased approach - extract the highest-value services first, validate the benefits, then continue. "Big bang" rewrites have a poor track record.

What is the ongoing operational investment? Building the microservices is a one-time cost. Operating them is a permanent one - in tooling, expertise, and engineering overhead. Get an estimate for the steady-state cost increase before approving the project.

| Dimension | Monolith | Microservices | Verdict |
| --- | --- | --- | --- |
| Initial development speed | Faster - one codebase | Slower - more infrastructure | Monolith wins at start |
| Deployment independence | All or nothing | Per-service deployment | Microservices win at scale |
| Operational complexity | Simple - one application | High - N services to manage | Monolith wins throughout |
| Targeted scaling | Scale everything | Scale specific services | Microservices win at scale |
| Fault isolation | A bug can take down everything | Failures contained to services | Microservices win at scale |
| Team autonomy | Shared codebase creates conflicts | Teams own their services | Microservices win with large teams |
| Debugging complexity | One place to look | Distributed tracing required | Monolith wins throughout |


Key Takeaways

  • Monoliths are not bad architecture. They are the appropriate default for most companies at most stages. Every major tech company started with one.
  • Microservices solve real problems - at scale. Deployment independence, targeted scaling, and team autonomy matter when you have hundreds of engineers and millions of users. At smaller scale, the operational overhead exceeds the benefit.
  • Shopify's modular monolith decision is worth studying. Modularity (clean code separation) and distribution (separate services) are different problems. Most companies need the former; fewer need the latter.
  • The hidden costs are operational, not developmental. The slide deck shows the upside of microservices. The ongoing overhead of monitoring, debugging, and managing N independent services often does not make the presentation.
  • Demand a specific business problem, not an architecture philosophy. "Our competitors use microservices" is not a justification. "We cannot scale checkout independently from inventory during peak load" is one.
