I assume that wax uses Apple's ANE to do embeddings (so no third-party services like OpenAI are needed). Did you happen to compare search quality when using ANE embeddings vs. OpenAI's text-embedding-3-large (or another commonly used online embedding)?
Would wax also be usable as a simple variant of a hybrid search solution? (i.e., not in the context of "agent memory" where knowledge added earlier is worth less than knowledge added more recently)
Yes—Wax can absolutely be used as a general hybrid search layer, not just an
“agent memory” feature.
It already combines text + vector retrieval and reranking, so you can treat
remember(...) as ingestion and recall(query:) as search for any document
corpus.
It does not do “recency decay” (newer beats older) out of the box in the core
call signature, though. If you want recency weighting, store timestamps in
metadata and apply post-retrieval re-scoring or filtering in your app logic
(or query-time preprocessing).
I've added this to the backlog. It comes in handy when dealing with time-sensitive data. Expect a PR this week.
I'm German myself. To me this looks like a category of problem where you can no longer translate the word in the literal sense, because chances are low that the consumer understands the word ("Bremsschwelle" or whatever you end up picking).
Wouldn't it make sense to rather think of a completely different analogy? One that is really well-known by the target audience? From what I understand, you are building an app that inhibits people from doomscrolling. That is a well-established "German" word, too. Using that, people immediately understand what you mean, rather than trying to follow a broken analogy.
I know from my own experience that us folks in the US tend towards a lot of idioms in our communication and these can be a struggle to effectively translate. Most translation software translates literally instead of figuratively.
When I first went to work with an international company, I found myself having to break that habit, because my European colleagues would look at me funny half the time when I used idioms. Even though they spoke English perfectly, they couldn't understand me.
2) Ship / Show / Ask (https://martinfowler.com/articles/ship-show-ask.html), where "Show" and "Ship" are non-blocking PRs (or even directly committing to trunk, if you use trunk-based development), since not every(!) PR needs reviewing and/or should block the PR creator
Looks nice. I'm a Clockify fan myself. Your app and homepage also remind me a lot of https://timemator.com/ (which I ended up not using because it was unable to generate reports that show me the percentage(!) of time spent on different projects throughout the day).
Thanks for the compliment! I appreciate the comparison. I’m glad you like the look of Taim. I'm aiming to include advanced reporting features, including the ability to show the percentage of time spent on different projects throughout the day. If you have any other suggestions or features you’d like to see, feel free to share!
Also, in my experience, writing UI code is usually more(!) work than writing the functionality underneath, because
a) styling / layout has to be learned from scratch (e.g. because of a proprietary language such as QML or QWidgets for Qt)
b) you have to take care of every frikkin' single user interaction (which becomes worse the more dynamic and custom your UI is), and building proper accessibility is also no walk in the park
There I also explain that IF you use a registry cache import/export, you should use the same registry you push your actual image to, and set the "image-manifest=true" option (especially if you are targeting GHCR; on DockerHub, "image-manifest=true" would not be necessary).
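For reference, a registry cache export with that option looks roughly like this (the registry, org, and image names are placeholders; adjust them to your setup):

```shell
# Push the build cache to the same registry as the image itself,
# with OCI image manifests enabled for the cache export
# (needed for GHCR and many third-party registries):
docker buildx build \
  --tag ghcr.io/your-org/your-image:latest \
  --cache-to type=registry,ref=ghcr.io/your-org/your-image:buildcache,mode=max,image-manifest=true \
  --cache-from type=registry,ref=ghcr.io/your-org/your-image:buildcache \
  --push .
```

`mode=max` exports cache for all build stages rather than only the final one, which matters for multi-stage builds.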
After years of lurking, I made an account to reply to this
"image-manifest=true" was the magic parameter I needed to make this work with a non-DockerHub registry (Artifactory). I had spent a lot of time fighting this and its non-obvious error messages. Thank you!!
We use a multi-stage build for a DevContainer environment, and the final image is quite large (for various reasons), so a better caching strategy really helps in our use case (smaller incremental image updates, smaller downloads for developers, less storage in the repository, etc.).