I found microservices had the benefit of increasing release cadence and decreasing merge conflicts.
Are there complications? Sure. Are they manageable? Relatively easily with correct tooling. Do microservices (with container management) allow you better use of your expensive cloud resources? That was our experience, and a primary motivator.
I also feel they increase developer autonomy, which is very valuable IMO.
Decreasing merge conflicts sounds more like muting and/or deferring problems.
Microservice fanaticism seems to be coupled with this psychosclerotic view that the world can only exist either as microservices or as a monolith.
From what I've seen in the last 20+ years, if I had to pick one sentence to describe a fits-all enterprise setup (and it's as stupid as saying "X is the best" without context), it'd be: a monorepo with a dozen or two services, shared libraries, typed so refactoring and changes are reliable and fast, single-versioned, deployed at once, using a single database in most cases - one setup like this per team of up to 12 devs. Multiple teams like this, with coordinated backward compatibility on the interfaces where they interact.
Above all I'm saying that sentences like "microservices are better", "monoliths are better", "42 services are the best" are all stupid without context.
What your business does, whether you have 3 people or 10k, what roles and seniority you have, how long you've been in the project (3 months or 10 years), how crystallized the architecture is, at what scale you operate, what the performance landscape looks like, what pre-deployment quality assurance policies the business dictates, whether offline upgrades are allowed or you're operating 24h, which direction the system is evolving, where the gaps are (scalability, quality...), etc. are all necessary to determine the correct answer.
Building a website for a local tennis club requires different approaches than developing a high-frequency trading exchange, and both will differ from the approaches for a system that shows 1bn people one advert or another.
Seeing the world as hotdog vs. not-hotdog (microservices vs. monoliths) makes for infantile conversations. There is nothing inherently wrong with microservices, monoliths, or any of the other approaches to managing complexity, e.g.:
- refactoring code into shared functions
- encapsulating into classes or typed objects
- encapsulating into modules
- simply arranging code into better directory structures, flattening, naming things better, changing cross-sections, e.g. cutting by behavior instead of physical-ish classes and objects
- extracting code into packages/libraries inside the monorepo or into their own repositories, e.g. open-sourcing non-business-specific generic projects, or relying on a 3rd-party package/library
- extracting into dedicated threads, processes, actors/supervisors, etc.
- extracting into a service in the monorepo or a dedicated repository, or creating an internal team to black-box it and communicate via API specs, or using a 3rd-party service
...bonus points for:
- removing code, deleting services, removing nonsensical layers of complexity, simplifying, unifying, etc.
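To make one of those items concrete, here's a toy sketch of "extracting into dedicated threads" using only the Python standard library. `fetch_price` is an invented placeholder for some I/O-bound work; the point is that the extraction changes where the work runs, not the caller-facing interface:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_price(sku: str) -> float:
    # Placeholder for an I/O-bound lookup (e.g. an HTTP call).
    return len(sku) * 1.5

# Before: callers looped over skus and called fetch_price inline.
# After: the same work is extracted onto a dedicated thread pool,
# while fetch_prices keeps the same signature for its callers.
def fetch_prices(skus: list[str]) -> list[float]:
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(fetch_price, skus))

print(fetch_prices(["abc", "de"]))  # [4.5, 3.0]
```

The same shape applies one level up: swap the thread pool for a process pool, an actor, or a separate service, and the trade-off becomes latency and operational cost versus isolation.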
What is "correct tooling"? I haven't found anything that comes remotely close to providing a nice stack trace, for example. How "micro" are your services? Can the tooling stand up a local dev environment? How do you deal with service interface versioning? Is this great tooling vendor-tied?
On the stack trace, I think this is what the modern "observability" stuff is all about; traces, wide events, etc. One event per request. DataDog will say they do this, Honeycomb will say they do it and that DataDog is kinda lying, now there's OpenTelemetry, it's a deep rabbit hole.
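The "one event per request" idea can be hand-rolled in a few lines, vendor aside. This is a sketch of the concept only, not any vendor's or OpenTelemetry's actual API; all field names here are invented:

```python
import json
import time
import uuid

def handle_request(path: str, emit=print) -> None:
    """Handle a request and emit exactly one "wide event" for it.

    The idea: accumulate everything you learn about the request into
    a single dict, then emit it once, as structured JSON, at the end,
    instead of scattering dozens of unrelated log lines.
    """
    event = {
        "trace_id": uuid.uuid4().hex,  # would be propagated downstream
        "path": path,
        "start": time.time(),
    }
    try:
        # Enrich the event as facts become known during handling.
        event["user_tier"] = "free"
        event["db_rows"] = 42
        event["status"] = 200
    except Exception as exc:
        event["status"] = 500
        event["error"] = repr(exc)
        raise
    finally:
        event["duration_ms"] = (time.time() - event["start"]) * 1000
        emit(json.dumps(event))
```

Feeding these events into any structured log store already gets you queryable per-request context; the vendors differentiate on sampling, retention, and cross-service trace stitching.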
It's easy to say this is a lot of work to reinvent something you get for free with a (single language) monolith, but at least it's recognized as a problem worth solving.
The stack trace bit is hard. For local development, ideally there's some fancy service discovery where your local service can hit a development instance of another service if you don't have a version running locally.
Given sufficiently carefully designed logging (with a request id that starts at the outermost service and gets propagated all the way through) you should be able to see the equivalent in the logs from a development set of services when something goes wrong. Pulling a full request's logs out of production logging is a bit trickier.
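The propagation described above can be sketched roughly like this. The header name and helper functions are assumptions for illustration, not any particular framework's API:

```python
import logging
import uuid

# Hypothetical header name; W3C Trace Context uses "traceparent" instead.
HEADER = "X-Request-Id"

def incoming(headers: dict) -> str:
    # The outermost service mints the id; inner services reuse it.
    return headers.get(HEADER) or uuid.uuid4().hex

def outgoing(headers: dict, request_id: str) -> dict:
    # Attach the id to every call made to downstream services.
    return {**headers, HEADER: request_id}

def log(request_id: str, msg: str) -> str:
    # Prefix every log line so one grep pulls a whole request's story.
    line = f"request_id={request_id} {msg}"
    logging.info(line)
    return line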
For me, it boils down to "this is absolutely doable but I'd still rather have as few services as possible while still maintaining useful levels of separation" - at least for the primary business services. Having a bunch of microservice like things serving pure infrastructure roles can be much cooler depending on your situation.
I agree that such a system could be designed and built. I just haven't seen any tooling that provides it for next to free, the way modern languages do. As far as I can tell, you need an experienced developer to craft the system so that these features work. They don't come out of the box with any toolset I've seen, and I'm still looking and asking.
Lightstep is pretty cool and will show you something like a stack trace across systems, with timings, but it's not free (monetarily, or in dev cost to integrate it into your stack).
Yes, I see. I tried to understand their pricing. 10,000 active time series? Does that mean it will store and let me view 10k top level API calls for their fee? 10k separate services? I don't quite understand how this maps to actual use.
$100/service. Is that per top-level service? Covering 100 endpoints at one service each is going to be $10k/mo?
I believe the 10k time series is how many complete end to end traces (across multiple services) it will store at once. Lightstep has settings where you set sample rate, retention, which traces you want to collect etc.
I believe $100/service is top level service, not endpoint. But really not sure.
I agree. I think organizational scalability is an important benefit of microservices that doesn't always come up in these discussions. Having smaller, more focused services (and repositories) allows your organization to scale up to dozens or hundreds of developers in a way that just wouldn't be practical with a monolithic application, at least in my experience (I'm sure there are exceptions).
There are techniques that allow large teams to work on monoliths together. They take planning and discipline, but overall I would say they are far more reliable than microservice explosions for similarly sized systems, because the earlier you manage the integration, the less work it is. I.e., what you pay at source-integration time is less than what you pay dealing with deployments, infrastructure, and especially support across distributed systems, which can get real expensive real fast.
I've worked on multiple systems with around 50 developers contributing full-time to them, very practically.
The merge conflicts tend to be things like library updates or large scale refactors. It's massively easier to update a core framework 20 times than it is to do it once on a repo 20x larger due to merge conflicts and the minimum possible work being 20x larger.