Many teams start by building a monolith because it helps them move quickly when they need to get something to market. As organizations grow and the product becomes more complex, it becomes more difficult to maintain and innovate on a monolithic application. But it’s also challenging to know how to break up a monolithic application into microservices: to decide it’s time to make the investment, to determine how to approach it, and to balance that work with more direct revenue needs.
We’ve seen that some of the key business triggers for evaluating this transformation to microservices come when:
- Products move up-market and need to reach enterprise customers
- Innovative features can’t be built quickly enough
- A particular component of the product has different scale and performance requirements
- The org is trying to attract and retain top tech talent
- Cloud migration creates an opportunity for app modernization
In this post, we’ll take a look at the decision-making process for when to invest in breaking down the monolith using microservices and share our advice on how to get started.
Monolithic architecture vs microservices
Let’s start with some definitions.
A microservice is an independent service built to model a business domain. A microservices architecture, therefore, structures your application as a suite of microservices that collectively encapsulate all of its business capabilities. Each microservice is built, tested, and released independently, which means the only coupling between services should be their exposed interfaces. There is minimal central orchestration, and communication typically happens via messages (topics or queues), HTTP/REST, or RPC.
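To make that loose coupling concrete, here's a minimal sketch of message-based communication. The service and message names are hypothetical, and the in-memory topic stands in for a real broker (SNS, Pub/Sub, RabbitMQ, etc.); the point is that the only thing the two "services" share is the message schema.

```typescript
// Minimal in-memory pub/sub sketch: the only coupling between the two
// "services" is the OrderPlaced message shape. In production this topic
// would be a managed broker (e.g. SNS, Pub/Sub, RabbitMQ).
type OrderPlaced = { orderId: string; total: number };

class Topic<T> {
  private subscribers: Array<(msg: T) => void> = [];
  subscribe(handler: (msg: T) => void) {
    this.subscribers.push(handler);
  }
  publish(msg: T) {
    this.subscribers.forEach((handler) => handler(msg));
  }
}

const orderTopic = new Topic<OrderPlaced>();

// "Billing service": knows nothing about the order service's internals.
const invoices: string[] = [];
orderTopic.subscribe(({ orderId, total }) => {
  invoices.push(`invoice for ${orderId}: $${total}`);
});

// "Order service": publishes an event rather than calling billing directly.
orderTopic.publish({ orderId: 'A-100', total: 42 });

console.log(invoices); // [ 'invoice for A-100: $42' ]
```

Because the order service never references the billing service directly, either side can be rebuilt, redeployed, or scaled without touching the other.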
A monolithic application, by contrast, is built as a single unit. Building a monolith feels more natural, since all the request-handling logic runs in a single process. However, it makes it harder to keep the code modular and decoupled, and risks small changes having adverse effects on downstream dependencies. Any modification also requires the whole monolith to be rebuilt and redeployed. Monoliths scale by replicating the entire application onto additional servers behind a load balancer; microservices scale by replicating only the services that need it.
Pros and cons of microservices and monoliths
The annoyances monoliths cause have made microservices highly desirable, but it's still important not to be idealistic about what they provide. When weighing a monolithic architecture against microservices, it helps to consider the tradeoffs each one makes. Microservices trade runtime complexity for build-time simplicity and cheaper runtime scaling; monoliths pair build-time complexity with runtime simplicity, but typically higher scaling costs.
Let’s look at both sides.
The Microservice
Each microservice is independently deployable and scalable, as there is a clear boundary between each of the services. Loose coupling means that in-service changes only require the individual service to be redeployed.
This also means that if one module in your monolith requires significantly more scale than others, breaking it down can be a good way to optimize compute and memory costs because you’ll allow the module to scale independently.
On the flip side, the fundamentally distributed nature of microservices means remote calls must be made, and compared with in-process calls, the overhead of a remote call is significant. Service interfaces therefore can't feasibly expose fine-grained operations; they need to be replaced with coarser-grained interfaces that reduce the number of network calls. There is also more overhead for developers in handling each microservice's individual deployment and management, sometimes referred to as the 'microservice tax'. This tax is often only a fair trade-off in large, complex environments; if you can manage the system's complexity as a monolith, the cost may not be worth it.
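Here's a hedged sketch of that coarse-graining tradeoff. The `UserService` interface is hypothetical (not from any particular framework), and a counter stands in for real network round trips: the chatty fine-grained client pays three trips for what one coarse-grained call returns.

```typescript
// Sketch of the fine- vs coarse-grained interface tradeoff.
// Each method call on the "remote" service counts as one network round trip.
type Profile = { name: string; email: string; plan: string };

class UserService {
  public calls = 0; // simulated network round trips
  private users = new Map<string, Profile>([
    ['u1', { name: 'Ada', email: 'ada@example.com', plan: 'pro' }],
  ]);

  // Fine-grained: one remote call per field.
  getName(id: string) { this.calls++; return this.users.get(id)!.name; }
  getEmail(id: string) { this.calls++; return this.users.get(id)!.email; }
  getPlan(id: string) { this.calls++; return this.users.get(id)!.plan; }

  // Coarse-grained: one remote call returns the whole profile.
  getProfile(id: string): Profile { this.calls++; return this.users.get(id)!; }
}

const fine = new UserService();
const chatty = {
  name: fine.getName('u1'),
  email: fine.getEmail('u1'),
  plan: fine.getPlan('u1'),
};

const coarse = new UserService();
const profile = coarse.getProfile('u1');

console.log(fine.calls, coarse.calls); // 3 round trips vs. 1 for the same data
```

In a monolith both styles are in-process calls and the difference barely matters; across a network, the chatty interface multiplies latency on every request.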
The Monolith
Monoliths tend to be treated as synonymous with legacy, which leads to the line of thinking that monolithic applications are inherently bad. But they do have upsides in some situations. Keeping the code in one application means you can simply use it, instead of deciding how to share it across a distributed system. Monoliths are easy to reason about and usually easy to test locally, since there's just a single app to run.
However, some development complexity is much greater in monoliths. Upgrading dependencies, for example, can be nearly impossible in complex systems where a dependency is used extensively and a new version contains breaking changes. Someone else's changes can also impact the performance or security of your module. So monoliths tend to make the trivial parts of debugging easier, but make the hard stuff really hard.
Overall, microservices are less complex than most people make them out to be, and monoliths aren't much simpler. Our assessment: if automated testing and deployments aren't your team's strong point or priority (for example, if you have a smaller team and a quarterly deployment schedule), a monolith is a perfectly good choice. However, if you need regular, granular releases and your team is investing in highly automated testing and deployment, you'll typically be more successful with microservices.
When to start breaking down your monolith
If microservices are the right path for you, then the best time to switch is when the application meets the trade-off intersection between monoliths and microservices. This is generally where your application has become large, complex, and is supporting numerous different business capabilities. That way, each capability’s boundaries can be identified and then separated into a distinct microservice. Before decomposing into microservices, the monolith should be approaching a point where the high cost and slow pace of change make it ineffective and inefficient for new features or modifications.
As mentioned earlier, this often happens when a business needs to innovate, scale features, and meet customer requirements. While these are compelling needs, the process is still a big investment that causes teams to pause and try to push back this work. Here’s our advice for breaking down the work of breaking down the monolith 😄 so that it isn’t quite as daunting and hard to connect to business value.
How to break monolithic applications into microservices
It doesn’t have to be a massive journey to transition to microservices; there are many ways to minimize your risk and effort. The key is to make incremental changes.
An incremental approach to decomposing the monolith into microservices gives you the chance to learn about microservices while minimizing their impact on the production system. This means business operations and revenue growth can still be prioritized during the refactoring.
An incremental approach is also extremely practical. You get feedback on each microservice, rather than finishing a 'big-bang' rewrite and realizing it didn't meet any of your goals. That feedback gives you the option to stop at any time: if a few microservices achieve the goals you set, you can simply stop there. It's perfectly valid to run a hybrid architecture, with a monolith supported by a handful of microservices.
Here are a few tips for how to get started and approach breaking down your monolith incrementally.
Pick the first microservice to extract
The easiest way to begin is to start small: identify a single module to extract from the monolith. Ideally, the first candidate is already heavily decoupled from the monolith, requires few changes to client-facing applications, and doesn't interact with a data store. With these factors, the risk is low, as any downstream dependencies are unlikely to break.
Before picking which microservice to start with, remind yourself that microservices aren't the goal; they're a means of achieving your goal, and you'll want a clear understanding of what that is so you don't get caught up in confusing activity with results. That clear goal will inform which microservice to build first, so you start with something that delivers the most value immediately.
A final piece of advice about which microservice to start with: this first microservice is going to require new CI/CD pipelines, development tooling, testing, monitoring, logging, and security, most of which won't work the same way they did in the monolith. Because you're laying this groundwork, it's a good idea to start with a simple service; it also lets the team focus on learning and upskilling.
Build new modules as microservices
Likely even easier than extracting an initial microservice from your monolith: build an entirely new feature from scratch as a microservice. You might have a feature that's been on your mind for a while but that you've put off because of the overhead it might add to your architecture. This approach lets you focus development on adding new value to the product while leaving the existing monolith intact.
Try the strangler fig pattern
One useful pattern for incremental change is the strangler fig pattern: a new system grows around the old one until the old system is 'strangled' and can be removed. Slow, incremental change means each step can be monitored over time, making the probability of something breaking quite low. Shopify has a good walkthrough of how they refactored their core system using this method.
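In practice, the pattern often boils down to a routing facade in front of the monolith. Here's a hedged sketch (the route prefixes are hypothetical) that sends migrated paths to the new service and everything else to the legacy app:

```typescript
// Strangler fig sketch: a facade routes migrated paths to the new
// microservice and everything else to the legacy monolith. As more
// routes migrate, the monolith is gradually "strangled".
const migratedPrefixes = ['/billing', '/invoices'];

function route(path: string): 'new-service' | 'legacy-monolith' {
  const migrated = migratedPrefixes.some((prefix) => path.startsWith(prefix));
  return migrated ? 'new-service' : 'legacy-monolith';
}

console.log(route('/billing/123')); // new-service
console.log(route('/orders/7'));    // legacy-monolith
```

Each migration step is just adding a prefix to the list, which is easy to monitor and trivially easy to roll back.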
Rely on other tools
Using tools means you can focus on your business-level application code, and stop attempting to reinvent the wheel. Tooling is built with industry best practices in mind, so you don’t have to do the research yourself. Microservices have been ramping up in popularity for a while now, and the ecosystem has reflected this. There are hundreds of tools all tackling different problems, including messaging, logging, orchestration, deployment, and more.
For example, managed services implement the backend-for-frontend pattern using API gateways. An API gateway acts as the entry point for service requests, taking care of proxying or fanning out requests to one or more services. The Nitric framework supplies painless tooling for creating API gateways and other commonly used resources like events, queues, and collections.
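As a rough, framework-agnostic sketch of what a gateway does (the service names are hypothetical, and the async functions stand in for real HTTP/RPC calls), consider a single client request fanned out to two backend services and merged into one response:

```typescript
// Framework-agnostic gateway sketch: fan a single client request out to
// two backend services in parallel and merge the results. Real gateways
// add auth, retries, rate limiting, etc.; tools like Nitric and managed
// API gateways handle this plumbing for you.
type GatewayRequest = { userId: string };

// Stand-ins for remote services (these would be HTTP/RPC calls in practice).
const profileService = async (id: string) => ({ id, name: 'Ada' });
const ordersService = async (_id: string) => [{ orderId: 'A-100', total: 42 }];

async function gateway(req: GatewayRequest) {
  // Fan out in parallel, then compose a single response for the client.
  const [profile, orders] = await Promise.all([
    profileService(req.userId),
    ordersService(req.userId),
  ]);
  return { profile, orders };
}

gateway({ userId: 'u1' }).then((res) =>
  console.log(res.profile.name, res.orders.length),
);
```

The client sees one coarse-grained endpoint, while the gateway absorbs the chattiness of talking to multiple services.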
To stay most productive with your monolith decomposition, leverage tools to take away those additional burdens.
Decouple data strategically
If multiple applications read and write to a central data store, decoupling that data can be a massive blocker as you start decomposing the monolith. It's important to do it sooner rather than later, as teams can only move as fast as the slowest part. The actual process of decoupling the data depends on how stateless your data is and how tightly your applications are coupled to the data store.
The delivery team needs a migration strategy to incrementally move the old service's readers and writers over to the new, decoupled data store. Stripe details a migration strategy that would work for most environments requiring incremental migration of coupled applications away from production data stores. Using this method or something similar, you can approach the data migration with the same incremental changes as the rest of the monolith decoupling.
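The shape of such a migration is often a dual-write phase: writes go to both stores while reads move over incrementally. Here's a hedged sketch with in-memory maps standing in for the real data stores (this is an illustration of the general idea, not Stripe's actual implementation):

```typescript
// Dual-write migration sketch: during the migration window, writes go to
// both the old and new stores; reads prefer the new store and fall back
// to the old one. Once the backfill completes, the old store is retired.
const oldStore = new Map<string, string>();
const newStore = new Map<string, string>();

function write(key: string, value: string) {
  oldStore.set(key, value); // keep legacy readers working
  newStore.set(key, value); // populate the decoupled store
}

function read(key: string): string | undefined {
  return newStore.get(key) ?? oldStore.get(key); // fall back during backfill
}

oldStore.set('user:1', 'legacy-only record'); // pre-migration data
write('user:2', 'dual-written record');

console.log(read('user:1')); // legacy-only record (served via fallback)
console.log(read('user:2')); // dual-written record (served from the new store)
```

Because every step (dual-write, backfill, read cutover, old-store removal) is independently deployable and reversible, the migration carries the same low-risk, incremental character as the rest of the decomposition.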
Final thoughts
As we talk to folks taking a look at the Nitric framework, we commonly hear that breaking down their monolith is a challenge. Hopefully, these ideas give you a good starting point for approaching your monolith decomposition and app modernization with incremental changes. And of course, using the Nitric framework can help by removing the burden of deployment and infrastructure.
We’d love to hear which of these tips you try out. How do they work for you? Would you be interested in a guide on how to structure your Nitric project based on monolith vs microservice patterns?