We've used a microservices architecture as a single team for many years and haven't had any issues with it. The key is to have a monorepo and stay consistent by following strict coding guidelines.
In my opinion it makes the backend way more resilient than a monolith.
There are some reasons it can lead to resiliency: another team's slow DB queries aren't hitting your database, teams aren't mutating data in a shared DB, and a memory leak in someone else's feature won't take your app down. You're also able to choose your language/libraries and tune the runtime to your requirements.
Of course, when you replace function calls with network calls and make everything asynchronous and eventually consistent, there is a lot of work to do to avoid ending up with a less reliable system.
> IPC is still simpler and cheaper than network calls.
I specifically called out the extra complexity of network calls in microservices, not sure if you read the full comment.
> A monolith doesn't force a single process
I'm not convinced; if my small/specific code has its own process, I would say it's a microservice. Sure, we can have replicas for redundancy, but that doesn't mean I won't have reliability issues when my process crashes.
> A monolith doesn't force a single database.
> A monolith doesn't force never having an external service for a specialized use case, or FFI
True, sadly it doesn't usually work this way. People take the path of least resistance.
Also, once you add multiple DBs you start to get into eventual consistency, which is one of the harder parts of microservices.
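To make that concrete, here's a minimal sketch of why a second store creates an eventual-consistency window the moment a write has to land in both. The JDBC and Jedis handles, table, and key names are hypothetical, not from anything above:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import redis.clients.jedis.Jedis;

    public class DualWrite {
        // "db" and "cache" are assumed to be wired up elsewhere.
        static void saveEmail(Connection db, Jedis cache, String id, String email)
                throws SQLException {
            try (PreparedStatement ps =
                    db.prepareStatement("UPDATE users SET email = ? WHERE id = ?")) {
                ps.setString(1, email);
                ps.setString(2, id);
                ps.executeUpdate();               // write #1 commits here...
            }
            cache.set("user:" + id + ":email", email);
            // ...but write #2 can fail or lag, since there's no transaction
            // spanning both stores. Readers can see stale data until something
            // reconciles the two -- that window is the eventual consistency
            // you sign up for.
        }
    }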
> I specifically called out the extra complexity of network calls in microservices, not sure if you read the full comment.
Calling out networking doesn't preclude me from mentioning IPC. IPC isn't limited to network calls; it can be as simple as shared memory and hit millions of ops: github.com/OpenHFT/Chronicle-Map
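For a concrete picture, the shared-memory setup from that README boils down to something like this. The file path, map name, and sizing numbers are placeholders:

    import java.io.File;
    import java.io.IOException;
    import net.openhft.chronicle.map.ChronicleMap;

    public class SharedMapDemo {
        public static void main(String[] args) throws IOException {
            // Every process that opens the same file maps the same off-heap
            // memory; puts in one JVM are visible in the others. No sockets.
            ChronicleMap<CharSequence, CharSequence> shared = ChronicleMap
                    .of(CharSequence.class, CharSequence.class)
                    .name("order-cache")
                    .averageKey("order:1234567")
                    .averageValue("{\"status\":\"FILLED\"}")
                    .entries(1_000_000)
                    .createPersistedTo(new File("/dev/shm/orders.dat"));
            shared.put("order:1234567", "{\"status\":\"FILLED\"}");
        }
    }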
> I'm not convinced; if my small/specific code has its own process, I would say it's a microservice.
And you'd be wrong. A core tenet of microservices is being able to individually deploy them. If I spin up a new process for some high-risk, highly memory-intensive task, I've introduced a fraction of the operational complexity of a separate server and retained the core value proposition of reducing its blast radius if things go south.
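As a rough sketch of that isolation (the com.example.RiskyTask entry point and the heap cap are hypothetical):

    import java.io.IOException;

    public class RiskyTaskLauncher {
        // Run the memory-hungry job in a child JVM: if it OOMs or crashes,
        // the parent only sees a non-zero exit code instead of dying with it.
        public static int runIsolated(String inputPath)
                throws IOException, InterruptedException {
            Process child = new ProcessBuilder(
                    "java", "-Xmx8g",
                    "-cp", System.getProperty("java.class.path"),
                    "com.example.RiskyTask", inputPath)
                    .inheritIO()
                    .start();
            return child.waitFor(); // caller decides what failure means
        }
    }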
Of course, again: if you're having so much trouble writing reliable software that you begin to consider isolating instability a top benefit of your IPC setup instead of a tiny value add... it might be a sign you're not ready for microservices.
_
> True, sadly it doesn't usually work this way. People take the path of least resistance.
> Also once you add multiple DBs you start to get into eventual consistency; which is one of the harder parts of microservices.
You're making my point: If you don't have the engineering chops as a team to make a robust monolith, you definitely don't have the skills and resources to start looking at microservices.
Eventual consistency is not inherent to having multiple databases. If I have an oft-changing, ephemeral set of data that only affects one feature and is creating an impedance mismatch with our main datastore, nothing is stopping us from pulling in Redis for all the queries we were previously sending to Postgres, and as far as anything relying on that feature is concerned, nothing at all changed.
With even half-decent engineering, Redis going down doesn't break any differently than it would have for a microservice: you define the same error boundaries as before, and the failure case ends up the same.
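A sketch of what that boundary can look like (the Jedis client and key names are illustrative):

    import java.util.Optional;
    import redis.clients.jedis.Jedis;
    import redis.clients.jedis.exceptions.JedisException;

    public class SuggestionStore {
        private final Jedis redis = new Jedis("localhost", 6379);

        // The boundary is identical whether this data lives in-process, in
        // Redis, or behind a microservice: catch at the feature's edge and
        // degrade only that feature.
        public Optional<String> suggestionsFor(String userId) {
            try {
                return Optional.ofNullable(redis.get("suggestions:" + userId));
            } catch (JedisException e) {
                return Optional.empty(); // Redis down: feature degrades, app lives
            }
        }
    }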
I mean seriously, if your team can't handle having a second data store, imagine the bedlam when you're trying to handle multiple languages across multiple data sources in a non-centralized manner?
_
Microservices are a pattern for companies where a "microservice" gets the kind of development and devops support that would justify spinning off a new mid-sized enterprise.
When you're Netflix your `api/movies/[movieId]/subtitles` endpoint is serving the kind of traffic most companies will never see in their lifetime and needs optimizations that maybe 100 companies in the world will ever need.
For the rest of us, EC2 has machines with 224C/448T CPUs, 24,000 GB of RAM, and 38 Gbps of I/O bandwidth. If your business ever scales so far that you outgrow that, throw some of that X-billion-dollar-valuation money at the problem and build your microservices.
> Calling out networking doesn't preclude me from mentioning IPC.
You made the same point I made as though it was in contradiction to what I said. Adding a network call adds complexity, yes.
> A core tenant of microservices is being able to individually deploy your microservices.
And why would you not want this to be independently deployable?
> You're making my point: If you don't have the engineering chops as a team to make a robust monolith, you definitely don't have the skills and resources to start looking at microservices.
Firstly, you never made that point. Also, I never argued against it, in fact I agree completely.
> Microservices are a pattern for companies where a "microservice" gets the kind of development and devops support that would justify spinning off a new mid-sized enterprise.
Ah sorry, I guess replying to people supporting microservices by calling out the gaps in technical knowledge they're using to justify microservices is not the same as saying ..."you definitely don't have the skills and resources to start looking at microservices"
Ah, wait it is.
> And why would you not want this to be independently deployable?
Because FAANG has more engineers devoted to managing deployment/observability/version skew/DX/scaling/security than you have engineers. Simplifying your needs in those realms helps you greatly.
That says exactly nothing. At Netflix scale their most random "trivial" endpoints are easily doing scale that entire SMEs won't ever deal with.
When FAANG is your case study in any technical discussion in a public forum, you're default wrong. I work at an AV company, I'm not about to start telling people the insane architecture we need to support ingesting petabytes of data is something that anyone else needs.
Any useful technical discussion needs to be grounded in what the 99% need, and microservices are not it.
> Ah sorry, I guess replying to people supporting microservices
Again, at no point did I make an argument for microservices.
> that's just SOA
Absolutely not, from your own reference:
> Each service provides a business capability.
Spinning a high-memory task off into its own process is not a business capability. Microservices are more granular than SOA services; you're describing a microservice.
> That says exactly nothing. At Netflix scale their most random "trivial" endpoints are easily doing scale that entire SMEs won't ever deal with.
You said microservices are for when a microservice would have support equivalent to a medium enterprise; that's not true even at Netflix scale. They absolutely have services owned by very small teams, or else they wouldn't have more than 1,000 of them.
> When FAANG is your case study in any technical discussion in a public forum, you're default wrong.
Well who do we use as a case study on microservices then?
> Any useful technical discussion needs...
A technical discussion requires nuance, not a descent into black-and-white, one-side-versus-the-other arguing.
Yes, you can have multiple DBs in a monolith, but you tend not to. In microservices you are basically forced to.
It's a crude and expensive way to force modularisation. However, that is still what it often achieves: it gives you infra that you can keep other people away from, and it lets you be in charge.
Bad input crashes the app, the monolith fails over, the other instance hits the same input and crashes too. Full outage.
Assuming proper vertical separation, microservices can reduce this risk.
Care to enlighten me? Your comment reads like "assuming the best case scenario for X and worst case scenario for Y, X can reduce the risk". Well, you don't say.
There'll always be critical microservices that keep your app running. It doesn't matter if all your other services are running if the one serving up core functionality goes down.
If your engineering rigor is so poor that you can't get reliable failovers with a monolith, god help you keeping microservices running.
It's more important to keep the number of features low. Good devs talk about aligning the architecture to requirements, and that sometimes includes microservices.
This is true. I completely agree with the minimized scope. However, a dev needs to be careful not to let their service just sit unmaintained; teams will always need engineers to maintain and improve their services.
You talk as if monolithic apps are vastly superior. To be forward, it depends entirely on the purpose and life of the application. It's about whichever shoe fits the design.
It depends on the purpose of the application, though. A monolith is a good architecture when you have a few purposeful features and functions.
But when your design relies on many services to provide a wide variety of features, you need to break the design apart to allow teams to operate independently.
Mini-monoliths are more popular today than the traditional monoliths of old.
Yeah no, I get you. You just want a monolith to be purposeful when you design one, not multi-purpose. This also runs into the limitations of a programming language. I'm kind of a Kubernetes guy, but I'm dying because it relies so heavily on a virtualized, distributed network. Performance would drastically increase if Kubernetes clusters were built like monoliths and each Kubernetes node handled traffic independently; sort of like keeping it all in the same rack and only leaving the rack if needed. But I keep seeing bad technology decisions repeated over and over. I stopped pushing because some person with a bigger title would say this is good design. Big Kubernetes clusters eventually fail. Multiple small clusters survive.
Replacing a function call with a network call does not really solve any org issues. There is pretty much zero difference between teams shipping "modules" for a monolith and shipping microservices, apart from the much simpler CI/CD setup in the monolith's case. You can gain some scaling efficiency by scaling services independently, but it's a minor advantage for most projects.
In two decades of development, I've never once seen a monolithic architecture with some form of shared database that wasn't terrible for the business it powered. I certainly understand why it's very common: it's what's still being taught to most CS students in my country, after all, and it's frankly a lot easier to implement. The result, however, is always the same. It ends up being a mess where nobody can do anything, because the data structures are so intertwined (and undocumented) that nobody knows how they're actually used. What happens is that monoliths become magnets for business logic, and then you bottleneck every change through a select few members of your organisation. As time goes by, you end up with a giant turd that stagnates and directly hurts your business. Not by intention, but because that's exactly what happens when you make things complicated.
It's important to keep in mind that this isn't a technical problem; it's an organisational problem. In fact, there is no technical reason why monoliths would be an anti-pattern, which is likely why they are still taught as though they weren't at many universities, where professors still naively think that the MBAs aren't going to cost-cut IT at every opportunity, even though their entire organisation is made up of employees who spend 100% of their working time on IT devices of some form. Similarly, microservices aren't really the "technical" response to this; they're how IT and digitalisation had to evolve to keep up with business demands and better generate value. The simpler and more decoupled you keep things, the better you'll be able to respond to business needs. Sure, there are a gazillion different ways to do microservices wrong, and if you do it wrong, then you'll likely be in the same mess that you would be with a monolith, only so much worse, because now you have 9 million tiny monoliths and shared databases.
Luckily, we still live in a world where everyone is somehow still OK with IT not working. The other day we went to an appointment (what kind isn't relevant), and they had a tablet where you could register your license plate to avoid getting a parking ticket. It didn't work, so we talked with the receptionist, who was like, "yeah, it does that all the time, don't worry, if the systems are down then they can't give out tickets"... Fine for us, but think about that. It turned out the system was down in my entire city, which means all those hundreds of employees who are out handing out tickets had nothing to do while their IT system was being fixed; hell, the entire company wasn't generating income for my city while its IT was down, and this was a regular occurrence?

My point with this is that you can do things really wrong and still be a "successful" company; it's just that the companies who manage to generate value better (which is frankly always microservices of some form) tend to simply do better. But like I said, you can do "microservices" in a million different ways. Running two different Django backends to handle different parts of Instagram could be considered having two microservices, after all. What matters is how rapidly you can deal with the needs of your organisation.
Agree with most everything you've said. Just want to point out that IT stuff not working properly is only grudgingly accepted when users are captive. Could be a government service (as in your example), a corporate monopoly, or a work-mandated application. If it doesn't work properly, users are stuck with it no matter what.
But for anything where there's healthy competition, this completely changes. Errors, bugs, conceptual problems, etc absolutely will have an extremely negative impact.
As an example I once worked for a company selling tickets online, but there were numerous bugs, and the system would often crash under load. Long story short, we lost many users to competitors, that company is no longer independent, and all that code is now legacy.
Compare that with the monopoly situation of Ticketmaster: they are far worse than this company ever was, and are quite successful, with a large user base. That hates them ;-)
The problem with most companies and monoliths is that they broke the first rule of engineering: keep it simple. A tool or service should have one purpose in mind. Multi-tools are fine if they are used infrequently, but no single tool should shoulder every burden, or it loses efficiency.
The same thing happens in microservices too. You just need good planning and organization.