That's been my experience as well. I like the idea of Beads, but it's fallen apart for me after a couple weeks of moderate use on two different projects now. Luckily, it's easy to migrate back to plain ol' Markdown files, which work just as well and have never failed me.
I went for Cursor on the $200 plan, but I hit those limits in a few days. Claude Code came out after I got used to Cursor, but I've been intending to switch it up in the hope the cost is better.
I go to the API directly after I hit those limits. That's where it gets expensive.
I haven't used Cursor since I use Neovim and it's hard to move away from.
The auto-complete suggestions from fill-in-the-middle (FIM) models (either open source or even something like Gemini Flash) punch far above their weight. That combined with CC/Codex has been a good setup for me.
> another factor to consider is that if you have a typical Prometheus `/metrics` endpoint that gets scraped every N seconds, there's a period in between the "final" scrape and the actual process exit where any recorded metrics won't get propagated. this may give you a false impression about whether there are any errors occurring during the shutdown sequence.
Have you come across any convenient solution for this? If my scrape interval is 15 seconds, I can't exactly hold the process open for 30 seconds during shutdown just to guarantee two more scrapes.
This behavior is pretty much the reason our services still use statsd: the push-based model doesn't have this problem, since the process sends its final metrics before exiting rather than waiting to be scraped.
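To illustrate the push-based approach: a minimal, stdlib-only sketch of a statsd-style client that pushes a final counter from an `atexit` hook, so nothing depends on one last scrape happening. The daemon address, metric names, and the `StatsdPusher` class itself are all illustrative, not any particular library's API:

```python
import atexit
import socket

def statsd_line(name, value, metric_type="c"):
    # Format one metric in the statsd line protocol, e.g. "app.errors:1|c"
    return f"{name}:{value}|{metric_type}"

class StatsdPusher:
    def __init__(self, host="127.0.0.1", port=8125):
        self.addr = (host, port)
        # UDP: fire-and-forget, no connection setup, safe even if no daemon is listening
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def incr(self, name, value=1):
        # Push immediately; there is no scrape window to miss
        self.sock.sendto(statsd_line(name, value).encode(), self.addr)

pusher = StatsdPusher()

# Registered handlers run at interpreter exit, after the last scrape
# would have happened, so shutdown-phase errors still get reported.
atexit.register(pusher.incr, "myapp.shutdown.errors", 0)
```

The pull-world equivalent would be pushing the final registry state to a Prometheus Pushgateway from the same shutdown hook, but that adds another component to operate.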
So true. beads[0] is such a mess. Keeps breaking often with each release. Can't understand how people can rely on it for their day-to-day work.
[0] https://github.com/steveyegge/beads