(author here) Actually, I think that applies very similarly. The same way you can't just log willy-nilly in a web system, because you can easily overwhelm your logging infrastructure or introduce unwanted dependencies, you have to be mindful of instrumentation in low-level code. And the reasons you might not want to write to flash (you have none, you don't want to wear it out, security, no way to read it back out anyway) have parallels in how you containerize and deploy your app (as a Lambda? as a Docker container without persistent storage?).
The real-time aspect and hard resource constraints are indeed the fundamental difference. Some of these ideas (static memory allocation) still make a lot of sense when building microservices, for example, just in a much squishier form. I personally like building my microservices very similarly to my embedded systems: event driven, with different priorities to ensure time constraints, and bounded, statically allocated memory and queues. Even in languages like golang, my allocations are usually done up front and memory ownership is very clearly delineated.
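To make that concrete, here's a rough Go sketch of the shape I mean (the names, queue capacities, and Event type are made up for illustration): queues are allocated once at startup with fixed capacity, callers get backpressure instead of unbounded growth, and a single worker drains high-priority work before low-priority work.

```go
package main

import (
	"fmt"
	"time"
)

// Event is whatever unit of work flows through the system.
type Event struct{ ID int }

// Queues are allocated once, with fixed capacity, at startup --
// the Go equivalent of embedded-style static allocation.
var (
	highPrio = make(chan Event, 64)
	lowPrio  = make(chan Event, 256)
)

// submit refuses work instead of growing a queue: callers get
// backpressure and memory stays bounded.
func submit(q chan Event, e Event) bool {
	select {
	case q <- e:
		return true
	default:
		return false // queue full: drop, or tell the caller to retry later
	}
}

// worker drains high-priority work first and falls back to
// low-priority work only when the high-priority queue is empty.
func worker() {
	for {
		select {
		case e := <-highPrio:
			handle(e)
		default:
			select {
			case e := <-highPrio:
				handle(e)
			case e := <-lowPrio:
				handle(e)
			}
		}
	}
}

func handle(e Event) {
	fmt.Println("handled", e.ID)
	time.Sleep(time.Millisecond)
}

func main() {
	go worker()
	for i := 0; i < 10; i++ {
		submit(lowPrio, Event{ID: i})
	}
	submit(highPrio, Event{ID: 100}) // jumps ahead of the low-priority backlog
	time.Sleep(100 * time.Millisecond)
}
```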
While you might not have hard realtime in web systems, you very much want to avoid GC pauses and similar behaviour to avoid spiking your latency, as these things compound. The underlying concepts are a bit similar, at least in my head. I think "this needs to be O(1) in runtime and memory to be repeatable; if the constant factor is bad, we can work on that, but I don't need fast and elastic."
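One small way that plays out in Go (again, a sketch, with invented sizes): a fixed pool of preallocated buffers keeps per-request memory O(1) regardless of load, and the GC never sees per-request garbage. When the pool is empty you wait or fail fast instead of allocating more.

```go
package main

import "fmt"

// All buffers are allocated up front; memory use is O(1) under any load.
const (
	poolSize = 8
	bufSize  = 4096
)

var bufPool = make(chan []byte, poolSize)

func init() {
	for i := 0; i < poolSize; i++ {
		bufPool <- make([]byte, bufSize) // the only allocations in the program's lifetime
	}
}

// withBuffer lends a preallocated buffer to f and returns it afterwards.
// Receiving from an empty pool blocks, which is backpressure for free.
func withBuffer(f func(buf []byte)) {
	buf := <-bufPool
	defer func() { bufPool <- buf }()
	f(buf[:0])
}

func main() {
	withBuffer(func(buf []byte) {
		buf = append(buf, []byte("request payload")...) // stays within cap, no new allocation
		fmt.Println(len(buf), cap(buf))
	})
}
```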
As for distributed systems, I actually consider an SPI / I2C / CAN system to be a distributed system, and a lot of patterns (retries, mailboxes, timeouts, promises, bounded queues, circuit breakers) make sense in both.
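For example, a timeout + retry + circuit breaker wrapper doesn't care whether the flaky peer is another service over HTTP or a sensor on an I2C bus. A minimal, single-goroutine Go sketch (the Breaker type and its thresholds are made up for illustration, and it isn't concurrency-safe):

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// Transact is any request/response exchange: an HTTP call to another
// service, or a transfer on an I2C/SPI/CAN bus -- the pattern is the same.
type Transact func(ctx context.Context) error

// Breaker is a minimal circuit breaker: after maxFails consecutive
// failed calls it rejects new calls until cooldown has passed.
type Breaker struct {
	maxFails int
	cooldown time.Duration
	fails    int
	openedAt time.Time
}

var ErrOpen = errors.New("circuit open")

func (b *Breaker) Call(ctx context.Context, timeout time.Duration, retries int, t Transact) error {
	if b.fails >= b.maxFails && time.Since(b.openedAt) < b.cooldown {
		return ErrOpen // fail fast instead of piling more work onto a sick peer
	}
	var err error
	for attempt := 0; attempt <= retries; attempt++ {
		cctx, cancel := context.WithTimeout(ctx, timeout) // bound every attempt
		err = t(cctx)
		cancel()
		if err == nil {
			b.fails = 0
			return nil
		}
	}
	b.fails++
	if b.fails >= b.maxFails {
		b.openedAt = time.Now()
	}
	return err
}

func main() {
	b := &Breaker{maxFails: 3, cooldown: time.Second}
	flaky := func(ctx context.Context) error { return errors.New("NAK") }
	for i := 0; i < 5; i++ {
		fmt.Println(b.Call(context.Background(), 10*time.Millisecond, 2, flaky))
	}
}
```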
I definitely plan to write more about these lower-level details and provide some code examples to make the parallels more evident.