
This was exactly my question. Why do you even need an event loop? If awaits are just thread joins then what is the event loop actually doing? IO can just block, since other coroutines are on other threads and are unaffected.

Which is to say, why even bother with async if you want your code to be fully threaded? Async is an abstraction designed specifically to address the case where you're dealing with blocking IO on a single thread. If you're fully threaded, the problems async addresses don't exist anymore. So why bother?
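For concreteness, a minimal Python sketch of the model being described here, where an "async" invocation is just a thread fork and "await" is just a join (the names `fetch` and `fork` are purely illustrative):

```python
import threading

# Hypothetical sketch of "async call = thread fork, await = join":
# every call gets its own OS thread, and "awaiting" it is a plain join.
# Blocking IO only blocks that call's thread.

results = {}

def fetch(url: str) -> str:
    return f"response from {url}"   # stand-in for blocking IO

def fork(url: str) -> threading.Thread:
    t = threading.Thread(target=lambda: results.update({url: fetch(url)}))
    t.start()
    return t

t = fork("https://example.com")   # "async" invocation: fork a thread
t.join()                          # "await": join the thread
print(results["https://example.com"])
```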



Looking at the article, he's not implementing `Task` with `Thread`; he's round-robinning `Task`s through a simple `ThreadPool`. So instead of a single `Thread` making continuous progress on the work in the event loop, he has a set of `Thread`s making progress _in parallel_ on the work in the event loop. This is very much Java 21's approach to virtual threads (as well as that of in-language task runners like the ones you find in Scala libraries such as ZIO, Monix, Cats, and the venerable Scalaz).


How is that materially different from just making async invocations thread forks and awaits thread joins? I understand what the code is doing; I just don't understand the point, when the net effect seems the same as writing plain threaded code.


The difference is that you can only spin up so many OS threads, whereas you can run several orders of magnitude more "green threads" / "tasks" like this, round-robinned onto the system threads that make up your event loop executor. The key thing to understand is that `await` doesn't block the backing thread; it simply suspends the current task (the backing thread moves on, picking up the next ready task from the queue and running it to its next await point).
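A rough Python sketch of that mechanism (not the article's code; `Pause` and `job` are made-up names): a shared queue of ready coroutines and a couple of worker threads, each of which runs a task to its next await point and then requeues it rather than blocking on it:

```python
import queue
import threading

class Pause:
    """Awaiting this yields control back to the worker thread."""
    def __await__(self):
        yield

async def job(name, steps):
    for i in range(steps):
        print(f"{name} step {i}")
        await Pause()  # suspend; the backing thread picks up another task

NUM_TASKS = 6
ready = queue.Queue()
for n in range(NUM_TASKS):
    ready.put(job(f"task-{n}", 3))

remaining = NUM_TASKS
lock = threading.Lock()

def worker():
    global remaining
    while True:
        with lock:
            if remaining == 0:
                return
        try:
            coro = ready.get(timeout=0.1)
        except queue.Empty:
            continue
        try:
            coro.send(None)        # run the task to its next await point
        except StopIteration:
            with lock:
                remaining -= 1     # task finished
        else:
            ready.put(coro)        # still suspended: requeue for another turn

workers = [threading.Thread(target=worker) for _ in range(2)]
for t in workers:
    t.start()
for t in workers:
    t.join()
```

Here six tasks make interleaved progress on just two OS threads, and no worker thread ever sits blocked inside an `await`.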


If I understand correctly, it sounds like the idea is to map N tasks to M threads.

I suppose it’d only really be useful if you have more tasks than you can have OS threads (due to the memory overhead of an OS thread); then maybe 10,000 tasks can run on 16 OS threads.

If that’s the case, then is this useful in any application other than when you have way too many tasks to feasibly give each one its own OS thread?


The idea is to map N tasks to M threads. This is useful for more than just the case where you need more threads than the OS can spin up. As you scale up the number of threads, you increase context-switching and CPU-scheduling overhead. Being able to schedule a large number of tasks onto a small number of threads can reduce that overhead.


Having too many threads running at the same time can also cause a performance hit, and I don't mean hitting the OS limit on threads. The more threads you have running in parallel (remember, this assumes a GIL-less setup), the more you have to context switch between them. Running everything in an event loop lets you manage many events with only a few threads, for example by setting the number of event loop threads to the number of CPU cores.
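As a rough illustration of the N-tasks-on-M-threads idea (the numbers and the `handle` function are purely hypothetical): 10,000 small tasks scheduled onto a pool sized to the machine's core count, rather than 10,000 OS threads:

```python
import os
from concurrent.futures import ThreadPoolExecutor

def handle(task_id: int) -> int:
    return task_id * 2  # stand-in for the real per-event work

# Pool size matches the core count, so only a handful of OS threads
# ever compete for the CPU scheduler's attention.
with ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
    results = list(pool.map(handle, range(10_000)))

print(len(results))  # 10000
```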



