When to use microservices

First published on Tuesday, 17 January 2023

This is a blog article from a series in which I reflect on what I learned while funding, growing (and selling) SaaS product companies. I write these things down for myself, since I believe self-reflection is a crucial learning tool, but I also want to share my thoughts with anybody who might be interested. I'm looking forward to your comments on any of these topics. An overview of all related topics can be found below. I have a bunch of these blog posts lined up, so you might also want to follow me if you like this one.

For me there are three reasons to split your monolith into a dedicated (micro-)service:

  1. Unique scaling needs

  2. Requires a dedicated team

  3. Requires a significantly different tech stack

I don't see any value in splitting for any other reasons than the ones mentioned above. Every time you split off a service you'll introduce additional layers of complexity:

  • Network communication is less resilient than in-process communication

    Network communication is going to fail; it always will. Even if a service runs in the same data center and is deployed and scaled automatically, there are still many different error scenarios you'll experience. In-process communication has its own fault classes, but network communication always adds more on top, and at any non-trivial scale these failures will happen at some point (see the sketch after this list).

  • Release coordination will get harder

    You'll have two choices: either always release all services at once, or fight forward and backward compatibility concerns between the services.

  • Deployments are more complex

    Even with full infrastructure automation, which should be the default in any SaaS product company, deploying multiple services will be harder. The tech stacks will diverge over time, and so will the basic automation.
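
To make the first point concrete, here is a minimal sketch in Go of what a remote call has to deal with that an in-process call never does. The endpoint, retry count, and timeouts are assumptions for illustration, not a recommendation for a specific setup.

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// fetchProfile calls a hypothetical profile service. Unlike an in-process
// function call, it can hit DNS failures, refused connections, timeouts, or
// half-broken responses, so all of that has to be handled explicitly.
func fetchProfile(client *http.Client, url string) (*http.Response, error) {
	var lastErr error
	for attempt := 1; attempt <= 3; attempt++ {
		resp, err := client.Get(url)
		if err != nil {
			lastErr = err // connection refused, DNS failure, timeout, ...
			time.Sleep(time.Duration(attempt) * 100 * time.Millisecond)
			continue
		}
		if resp.StatusCode >= 500 {
			resp.Body.Close()
			lastErr = fmt.Errorf("server error: %s", resp.Status)
			continue
		}
		return resp, nil
	}
	return nil, fmt.Errorf("profile service unreachable after retries: %w", lastErr)
}

func main() {
	client := &http.Client{Timeout: 2 * time.Second} // never call a remote service without a timeout
	resp, err := fetchProfile(client, "http://profile-service.internal/users/42")
	if err != nil {
		fmt.Println("falling back to degraded mode:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```

None of this ceremony exists when the profile lookup is just a function call in the same process.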

While this is simplified and there are even more issues, it should give you an idea of why it might make sense to think about this twice. I'll also write about ways to ensure clean domains (bounded contexts) in monoliths, and maybe even about why technical measures don't solve organizational challenges.

Separate release processes

There was a discussion on Twitter recently, spawned by me talking about this topic in a podcast: Does it make sense to split your product into multiple services just to be able to release those services independently?

I'd say no, mainly for two reasons:

  1. All of your software should always be in a releasable state

    If you don't want a feature to be visible yet, hide it behind a feature flag (I'll write more about this topic at some point), so that you are always able to release any component of your stack at any time. How else do you fix an important bug or a security issue immediately? How else do you enable multiple teams to safely deploy your product multiple times a day? (A minimal sketch of such a flag follows after this list.)

  2. You avoid the forward and backward compatibility issues mentioned above

    If you want to release and roll back services independently, you'll have to pay close attention that those releases remain compatible. Remember, though, that at some point you will always have larger refactorings to perform which will change APIs.

    I will also write more about change rates while finding product market fit, and how change rates and backward compatibility are, in some respects, very different for high-performing SaaS products compared to Open Source software.
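
As promised in the first point, here is a minimal sketch of a feature flag, assuming a simple environment-variable based flag store; the flag name and the helper are hypothetical, and real products typically use a small flag service or configuration system instead.

```go
package main

import (
	"fmt"
	"os"
)

// enabled reports whether a feature flag is switched on. The FEATURE_ prefix
// and the "on" convention are assumptions for this sketch only.
func enabled(flag string) bool {
	return os.Getenv("FEATURE_"+flag) == "on"
}

func main() {
	// The new checkout code can be merged and released at any time;
	// users only see it once the flag is flipped in production.
	if enabled("NEW_CHECKOUT") {
		fmt.Println("rendering new checkout")
	} else {
		fmt.Println("rendering current checkout")
	}
}
```

Because the unfinished feature stays dark until the flag flips, every component remains releasable, and an urgent bug or security fix can ship immediately.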

I'll write more about related topics, because this also affects the commonly used feature branches, extensive pull request reviews, and repository structures.

Reasons to actually split into services

As mentioned above, there are still reasons to split things into dedicated services, and you should not force everything into a single monolith. In my experience, when one of the following three requirements is true, it most likely comes with at least one more from this list:

Unique scaling needs

Sometimes one part of your infrastructure scales differently from every other part. A common example is an audit log, which receives a lot of write requests and a very limited number of easily cacheable read requests, while other parts of your product are read-heavy.

A different example could be a service continuously communicating live updates between multiple users, while the rest of your product's usage profile is closer to a common CRUD application.

Those are two examples which might also have implications for the technology your team will choose. But most importantly, the load you'll see on such a service will probably be very different compared to other services, and it might have to be scaled in different situations.

Requires a dedicated team

As your company and your engineering team(s) grow, you'll have to split teams. Most people believe that a team size of 3 to 5 developers is usually optimal, and I wouldn't object personally. When the backlog for one part of your stack grows beyond what such a team can implement, you'll have to add team members and then split the team. In this scenario it is usually worth also identifying sets of bounded contexts, which can then be split into different services.

To me the main reason here is collective code ownership and close collaboration inside a team. If multiple teams work on the same part of your code base, it almost always ends up in an "us versus them" moment at some point. There is a reason for the "optimal" team size mentioned before, which is (simplified) the amount of direct personal context and contact people can continuously maintain and coordinate among themselves. If the team size grows beyond that, you'll get hidden sub-teams without clearly defined boundaries.

So you'll be better off clearly defining the boundaries and responsibilities on a service level and splitting the teams accordingly.

Requires a significantly different tech stack

There are some situations where something really requires a different tech stack. If you believe in boring old technology, which is usually stable and usable in many different scenarios, this does not happen too often, though. The fewer different technologies you use, the easier hiring and maintenance will be, so be careful about this.

But there are some situations where this will be true anyway. I still consider PHP an excellent choice for any common web load, but if you need web sockets, please use something else like Go or Node. I still consider MySQL an excellent database management system capable of handling many different load scenarios, but there are cases where CouchDB, Cassandra, or something entirely different will be required to fulfill more esoteric needs.
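
To illustrate the web-socket case, here is a minimal echo endpoint in Go. It uses the gorilla/websocket library as one possible choice; the /ws route and the bare-bones setup are assumptions for this sketch, not a production configuration.

```go
package main

import (
	"log"
	"net/http"

	"github.com/gorilla/websocket"
)

var upgrader = websocket.Upgrader{} // default options; tighten CheckOrigin in real code

// echo upgrades the HTTP connection and sends every received message back,
// keeping one long-lived connection per client, which is a very different
// load profile than short-lived CRUD requests.
func echo(w http.ResponseWriter, r *http.Request) {
	conn, err := upgrader.Upgrade(w, r, nil)
	if err != nil {
		log.Println("upgrade failed:", err)
		return
	}
	defer conn.Close()
	for {
		msgType, msg, err := conn.ReadMessage()
		if err != nil {
			return // client went away
		}
		if err := conn.WriteMessage(msgType, msg); err != nil {
			return
		}
	}
}

func main() {
	http.HandleFunc("/ws", echo)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```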

The hidden fourth reason

There is, of course, a hidden fourth reason I didn't cover here: if there is a performant, well-designed, domain-fitting, and scalable SaaS product, use it. Any software you don't have to write, deploy, extend, and maintain yourself, while it can power your business requirements at reasonable cost: go for it. There is never a reason to re-invent off-the-shelf software.

Summary

I like a solid monolith built with boring tech. Combined with good software design, it will always be possible to split it into (micro-)services once you've reached product market fit, and it keeps costs and effort low until then. With product market fit you'll get requirements from the list above, which will provide you with good reasons to split off services.
