
Great read.

I found it interesting that they basically replicated the XP playbook: cross-functional teams, continuous code review with pairing, collective team ownership of code and results, bounded contexts.

It makes me wonder if the "microservice" part of it was necessary. What if they had produced "microlibraries" rather than "microservices?"



>It makes me wonder if the "microservice" part of it was necessary. What if they had produced "microlibraries" rather than "microservices?"

I've wondered that every single time somebody has touted the benefits of microservices.

The people who have really 'succeeded' at it seem to conflate the benefits of looser coupling between dependent software systems (which is always a good thing) with making those systems talk to one another over a network socket (which isn't necessarily a good thing).


The benefits of microservices aren't architectural. The point about library APIs vs. service APIs is completely correct, because the underlying API design isn't actually what the microservice style addresses.

The point is rather organizational/operational: being able to independently deploy, scale, monitor, and manage different parts of the system.

That allows you to distribute ownership of the system, and also allows partitioning, so that a failure in one component need not affect the whole system.


"Organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations."

—M. Conway


Could you give an example of an architectural benefit vs operational benefit?


When I say 'architectural' there, I mean the underlying design: how the system is modeled and what objects and interfaces exist in that model.

E.g., say you have a billing component in your system. You may have the same underlying billing component in either a monolith or as microservices. It may be nicely decoupled in both cases (either existing as an independent module/library in the monolith case, or as an independent service in the microservices case).

The (potential) benefit afforded by the microservices approach is not that there's a better underlying design, but that a single team or developer can properly take ownership of deploying, scaling, monitoring and managing the service, independently of the rest of the application (plus scope for partitioning and graceful partial failures vs. whole-system failures).
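
To make that concrete, here's a minimal Ruby sketch (all names hypothetical) of the same billing contract backed two ways. Callers can't tell the difference; what changes is who gets to deploy, scale, and monitor it:

    require "net/http"
    require "json"

    # The interface callers program against is identical in both cases;
    # only the transport differs.
    class LocalBilling
      # Runs in-process inside the monolith.
      def charge(customer_id, amount_cents)
        { "status" => "ok", "customer_id" => customer_id,
          "amount_cents" => amount_cents }
      end
    end

    class RemoteBilling
      def initialize(base_url)
        @base_url = base_url
      end

      # Same signature, but the work happens in a separate service that
      # can be deployed, scaled, and monitored on its own.
      def charge(customer_id, amount_cents)
        uri = URI("#{@base_url}/charges")
        res = Net::HTTP.post(uri,
                             { customer_id: customer_id,
                               amount_cents: amount_cents }.to_json,
                             "Content-Type" => "application/json")
        JSON.parse(res.body)
      end
    end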


Yeah, I'm pretty skeptical of the benefits. It seems like better isolation within your monolithic application solves a lot of these issues, with no need for a network socket.

If you can't isolate things well enough for some reason, maybe it makes sense to have separate services (maybe run them all on the same machine, deployed at the same time, talking over a local socket?), but even then I suspect you just need normal services, not microservices.


Maybe adding the network socket makes the isolation within the code a requirement as opposed to a best practice. By this mechanism, maintaining isolation is not something someone can bypass "just this once" with a promise to fix it later.
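
A tiny Ruby sketch of what I mean (hypothetical names): in-process, a private boundary is only a convention that anyone can reach around; a socket leaves no such escape hatch.

    class Billing
      private_class_method def self.ledger
        @ledger ||= []
      end
    end

    # In a monolith, nothing stops a teammate doing this "just this once":
    Billing.send(:ledger) << { hack: true }  # quietly bypasses the boundary

    # Behind a network socket there's no `send` to reach for; the only way
    # in is the published endpoint, so isolation is enforced, not hoped for.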

Perhaps you're both right.


>Maybe adding the network socket makes the isolation within the code a requirement

I've worked with microservice applications that had extremely tightly coupled services that were all highly dependent upon one another.

Adding the network socket layer just magnified the problems caused by the tight coupling.

So yeah, if you want to make your life even more miserable, split up a tightly coupled "macroservice" into a series of tightly coupled microservices.


So I guess you're saying that microservices are pointless, the real problem is tight coupling?


Kind of.

There's a flood of blog posts (including this one, I think) that have conflated the two, probably unintentionally. I'm happy that decoupling their systems worked out well for them. I'm not so happy this is fomenting a new fashion for creating unnecessary network API endpoints.

That isn't to say that you should always combine your services into one big mega-service. Just that dividing up services should be something you do only when it becomes obviously necessary and for good reasons unrelated to coupling.


I've tried that in the past. Even if modules aren't tightly coupled, deployment is, so different teams need to synchronize at deployment time. Resource isolation is also a big problem, if a module update introduces a performance bug, it will affect everything else. Yet another problem is keeping shared libraries in sync; if you want to update a core lib for component X, it will need to be updated (and tested) for everything else.


Why is deployment coupled? Or rather, why is there a need to synchronize at deployment?

I like the idea of microservices, but I think they're overkill for most systems. By that, I mean that I see the benefits, but I think people discount the skyrocketing development and operational complexities that come with distributing a system. I heard a quote recently that "the best services are extracted from existing systems, not designed up front." I think that's right. Microservices are great IF you need them, and it's really hard to get the bounded contexts right up front with an intuitive, usable API.

Anyway, one of the benefits of microservices is that they force you to really think about your "public" API. Any decent implementation will have some notion of API versioning. So, team A can truck along with updates, deploy them whenevs, and team B can move to the new version of A when they are ready.

Of course, supporting multiple versions is more work for team A and requires more careful planning of the upgrade path. And there will come a point when team A has to drop support for older versions. "C'mon folks, we're on version 4 of A; everybody has to move to version >=3 within 6 months." But that's just part of having truly isolated services, I think.

I don't see why you couldn't have a similar approach with versioned module APIs. Right?
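
As a sketch (hypothetical names, Ruby), a versioned module API could mirror the /v1 vs. /v2 routes a service would expose:

    module Billing
      module V1
        # Old contract: amount in dollars. Kept until every caller migrates.
        def self.charge(customer_id, amount)
          V2.charge(customer_id, (amount * 100).round)
        end
      end

      module V2
        # New contract: amount in integer cents.
        def self.charge(customer_id, amount_cents)
          { customer_id: customer_id, amount_cents: amount_cents,
            status: "ok" }
        end
      end
    end

    Billing::V1.charge(42, 9.99)  # team B, not migrated yet
    Billing::V2.charge(42, 999)   # team A, already on the new contract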

I think your other points are spot on. Things like performance (and error) isolation can be handled through other means, but a services approach (deployed to separate boxes, I'm assuming) makes it cleaner. And it, again, forces you to think about what happens if the dependency is unavailable. Maybe we push updates to a queue, maybe we use some async fetches here with a fallback default if we don't get a response in N ms, etc. Not that you can't do these things in a monolith, but they "feel weird" and require more rigor than most teams can maintain in the face of deadlines, i.e., it would be a whole lot easier to just call this method in this other module. Microservices/SOA force it to happen.
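
E.g., a hedged sketch of that fallback pattern in Ruby (RecommendationService is a made-up dependency; the pattern is the same whether the call is in-process or remote):

    require "timeout"

    # Ask a dependency for data with a hard deadline; degrade gracefully
    # instead of hanging the page if it's slow or down.
    def recommendations_for(user_id, fallback: [])
      Timeout.timeout(0.2) do                 # 200 ms budget
        RecommendationService.fetch(user_id)  # hypothetical call
      end
    rescue Timeout::Error, StandardError
      fallback  # the default if we don't get an answer in time
    end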


>I heard a quote recently that "the best services are extracted from existing systems, not designed up front." I think that's right.

Damn right. Architecture should be an emergent property of your system and built incrementally. The people who do it up front almost always do it wrong.


>I've tried that in the past. Even if modules aren't tightly coupled, deployment is, so different teams need to synchronize at deployment time.

No they don't. There's no reason why two different teams can't schedule an upgrade of the same service at different times. The riskiness of this is entirely dependent upon how good your integration test suite is.

>if a module update introduces a performance bug, it will affect everything else.

The module will still affect everything that is dependent upon it if it is rebuilt as a microservice. You're just moving the performance problem from one place to another.

>if you want to update a core lib for component X, it will need to be updated (and tested) for everything else.

Ok, so upgrade the library and run the full set of integration tests.


> The module will still affect everything that is dependent upon it if it is rebuilt as a microservice.

Your services are running on different servers (or containers) from each other, so they're partitioned. If one service has a bug that introduces a catastrophic error and eats all the server's resources, you'll get one of:

Monolith: brings down the service completely.

Microservices/SOA: timeouts to part of the system, and partial loss of functionality.

(Assuming you've done a decent job of engineering for partial failure)


>Monolith: Bring down the service completely.

Unless you've scaled your "monolith" horizontally, in which case it takes out one server.

If you've got a decent system, it can self heal from that and ping you via a monitoring system.

>Microservices/SOA: Timeouts to part of the system

Causing all manner of annoying behavior and difficult-to-track-down bugs, like an endlessly loading web page on a completely different system that happens a couple of times a week, instead of a clear error message.

>Assuming you've done a decent job of engineering for partial failure

If you assume a fantastic engineering job, you can make the worst architectural patterns "work". That doesn't mean they are a good idea.


There's absolutely no reason why you can't time out within your monolith.


Indeed. But you can't guard against catastrophic system failures (out of memory, disk, processor time, corruption) in the way you can with independent services.


>It makes me wonder if the "microservice" part of it was necessary. What if they had produced "microlibraries" rather than "microservices?"

So the OP did discuss considering micro-libraries (perhaps via Rails engines), but they decided not to.

> We discussed using Rails engines and various other tools to implement this... At the deployment side, we would need to make sure that a feature can be deployed in isolation. Pushing a change to a module to production should not require a new deployment of unrelated modules, and if such deployment went bad and production was broken the only feature impacted should be the one that suffered the change...

It goes on a bit. I think the reasons against this approach aren't entirely clear in the discussion; it would be good to hear more.

Although as a Rails dev myself, this one rings true:

> The code had suffered a lot during the past few years, tech debt everywhere. Besides the mess we made ourselves, we still had to update it from Rails 2.x to 3, and this is a big migration effort in itself

The ability to migrate from Rails 2 to 3 one service at a time is actually a pretty huge benefit, since that migration was monstrous. This is probably generalizable.

One other thing their ultimate microservice approach got them was the ability to write different services in different languages, and thus gradually transition to Clojure/Scala. I don't know if that was part of the original analysis, or if everyone would consider this a benefit. :) But it worked out for them.

Lately, the reasons microservices are a _pain_ have been pretty clear to me, so it's good to get an essay like the OP, grounded in very specific experience, on how microservices worked out very well for them. It does seem to make sense by the end. As the OP also says at the beginning, this is as much for organizational reasons as technical ones. I suspect you need a fairly large team, where the microservices can be divided up among different developers, as in the OP, before the benefits can start to outweigh the costs.


I think you have to read through to the part where he talks about the deployment impacts. A key part of what happened was that they needed deployment flexibility as well. From their perspective, they would have needed to implement the same basic infrastructure to achieve that flexibility:

> But even if everything went smoothly, we knew that the current code for the monolith had to be refactored anyway. The code had suffered a lot during the past few years, tech debt everywhere. Besides the mess we made ourselves, we still had to update it from Rails 2.x to 3, and this is a big migration effort in itself.

This was probably critical to their ability to adopt new technologies, like Clojure and later Scala.



