
Thank you for your feedback and appreciation.

During development, I initially tried implementing coroutines in a way that would allow executing them without spawning a new thread. However, it introduced complications, so I eventually scrapped that approach.

Now, with an eye on potential improvements, I can revisit this idea from the perspective of I/O operations.


Thank you for bringing this up.

I hadn’t come across folly::Coro until now. It does seem quite similar at first glance, and some of the utility functions it provides are ones I’m also planning to implement; others have pointed out that they’re currently missing from my library.

One difference is that they use a custom Try&lt;T&gt; type for handling exceptions and values, whereas I’ve opted for std::expected, introduced in C++23. I’ve also added "monadic-like" chaining of tasks.

Overall, it’s a very similar library, and I’ll definitely look into it for inspiration and potential improvements.


Thanks for spotting that! I appreciate the feedback and will update the README.


Thank you for asking!

I've skimmed through Taskflow, and from what I understand, its main focus is on graph parallelism, allowing users to express computations as a graph.

I haven't done extensive benchmarking against third-party libraries yet, as others have also pointed out. I'll definitely do more performance testing in the future to better assess and optimize performance.


I think that’s about right on focus; I brought it up more for the “modern C++” aspect…


Thank you for clearing that up.

Regarding modern features: for example, the return value from the executor in Taskflow is a custom tf::Future derived from std::future. This means that if you want to check the result, you need to use a try-catch block around the get() method on the future.

Personally, I prefer using std::expected for value/exception handling. It allows checking for errors with a simple if statement, and if you don't want to handle the exception, you can just return an error value from the coroutine.

As for "monadic chaining," Taskflow can achieve the same thing through its easy graph construction: you can set up a graph and execute it, which is comparable to and_then chaining in my library.

Another point is related to performance and ease of use. In a simple example like Fibonacci, Taskflow requires using subflows, which I feel is less "ergonomic".

Overall, for simple task parallelism, if you don't need the graph expressiveness of Taskflow, I believe my library is the more "ergonomic" choice (though I may be biased here). I also find value handling simpler in my library thanks to std::expected. That said, Taskflow is a much larger library with more features, such as GPU integration.

I hope this better addresses your point!


I haven't compared performance with the P2300 proposal yet. It seems like it's trying to unify asynchronous and parallel execution for C++, which is much broader in scope than my library.

It's true that coroutines can avoid heap allocation, but I haven't tested when or if that happens in my implementation. From the papers, it's clear that certain conditions must be met for the compiler to optimize this. If you know of any good sources on this, please let me know.

I think it's definitely worth looking into this optimization and possibly using custom allocators for specific cases. I'll also compare performance with the proposal's implementation[1] to see the difference.

Thank you for your feedback.

[1] https://github.com/NVIDIA/stdexec


Thank you for your response!

I don't have experience with WinRT, but it does seem quite similar at first glance. One of the key reasons I focused on modern C++ was to ensure cross-platform compatibility. However, I completely understand that if you're working on Windows and are already familiar with WinRT, sticking with it makes perfect sense. I'll take a closer look at WinRT to see if there are any significant differences.


My suggestion is to aim for compatibility with cppwinrt, but not anything else. That way devs can freely intermix the two and get the best of the utilities of both.


Thank you for your feedback.

I understand that working with tasks and retrieving values can feel a bit clunky. The main reason I've structured it this way is that individual tasks are RAII objects, and their coroutine state is destroyed once they go out of scope. However, I could modify the awaitable returned from wait_tasks to store tasks, and then return values directly to the user. This could definitely be a more ergonomic overload for the function. I'll look into it!


Hopefully, you find it useful! If you have any ideas or suggestions for improvement, feel free to open an issue or let me know. Thanks for considering it!


Thank you for your question.

I've included a link to Lewis Baker's blog (the author of CppCoro) in my repository as an excellent explanation of coroutines. From my understanding, after reviewing his library, it is no longer in active development and hasn’t been updated for a couple of years. CppCoro was an experimental library intended to explore coroutines while they were still an experimental feature. For example, CppCoro uses a custom type for storing values, similar to std::optional from the standard library (if I'm not mistaken).

For my implementation, I've opted to leverage std::expected from C++23 for storing values. I've also implemented monadic-like chaining. CppCoro, however, seems to focus more on asynchronous operations, whereas my library focuses more on task-based parallelism.

I don't have experience with Boost.Cobalt, so I can't provide insights there, but I will definitely look into it now that you've mentioned it.

Hope this helps.


Thanks for the update! Sorry, lately I had not been much around.

Boost.Cobalt can be found here: https://www.boost.org/doc/libs/1_85_0/libs/cobalt/doc/html/i...


I feel you! Coroutines can be tricky at first. I recommend Lewis Baker's blog about coroutines [1], which is detailed and insightful. Additionally, cppreference [2] is a great resource to understand how coroutines work in C++.

In a nutshell, C++ coroutines are almost like regular functions, except that they can be "paused" (suspended), and their state is stored on the heap so they can be resumed later. When you resume a coroutine, its state is loaded back, and execution continues from where it left off.

The complicated part comes from the interfaces through which you use coroutines in C++. Each coroutine needs to be associated with a promise object, which defines how the coroutine behaves (for example, what happens when you co_return a value). Then, there are awaiters, which define what happens when you co_await them. For example, C++ provides a built-in awaiter called suspend_always{}, which you can co_await to pause the coroutine.

If you take your time and go thoroughly through the blog and Cppreference, you'll definitely get the hang of it.

Hope this helps.

[1] https://lewissbaker.github.io/ [2] https://en.cppreference.com/w/cpp/language/coroutines


They're just green threads with some nice syntax sugar, right? Instead of an OS-level "pause" with a futex-wait or sleep (resumed by the kernel scheduler), they do an application-level pause and require some application-level scheduler. (But coroutines can still call library or kernel functions that block/sleep, breaking userspace scheduling?)


Yes, exactly. Coroutines are one possible way to implement green threads. Once they are scheduled/loaded on an OS thread, they behave just like regular functions with their own call stack. This means they can indeed call blocking operations at the OS level. A possible approach for handling such operations would be to wrap the blocking call, suspend the coroutine, and then resume it once the operation is complete, perhaps by polling (checking for completion).

