
Not strictly related, but in Redis, intersections between sets encoded as arrays of fixed-width 16-, 32-, or 64-bit integers (the encoding we use for small sets) can be performed very fast because the arrays are kept sorted: you simply walk two pointers through the two sets, advancing them and keeping only the common elements.
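The two-pointer walk over sorted arrays can be sketched as follows. This is an illustration of the technique, not Redis's actual C implementation; the function name is mine.

```python
def intersect_sorted(a, b):
    """Intersect two sorted integer arrays by advancing two pointers.

    Each pointer only moves forward, so the whole intersection is a
    single linear pass: O(len(a) + len(b)) comparisons.
    """
    i = j = 0
    out = []
    while i < len(a) and j < len(b):
        if a[i] < b[j]:
            i += 1          # a[i] can't be in b: skip it
        elif a[i] > b[j]:
            j += 1          # b[j] can't be in a: skip it
        else:
            out.append(a[i])  # common element
            i += 1
            j += 1
    return out
```

Because both inputs are sorted, no element is ever examined twice, which is what makes the small-set encoding cheap to intersect despite lacking a hash table's O(1) lookup.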

Of course there is a tension between fast intersections (which require a sorted data structure) and O(1) existence tests for elements. It depends on what you are trying to accomplish.



But that encoding only applies to limited-size sets: 512 entries with the default configuration. Our sets were millions of entries.


Sure, but we may implement this sooner or later so that small sets will be intersected very fast.


With the data in cache, 512 intset entries, and 64-bit intset values, intersecting takes under a microsecond even with the naive binary-search algorithm for intersection.
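The "naive binary search" approach mentioned above can be sketched like this: probe the larger sorted array once per element of the smaller one. This is a hypothetical illustration, not code from Redis; with 512 entries and at most log2(512) = 9 probes each, that's on the order of a few thousand comparisons, consistent with the sub-microsecond estimate.

```python
import bisect

def intersect_binsearch(small, large):
    """Intersect two sorted integer arrays by binary-searching the larger.

    Runs in O(len(small) * log(len(large))) comparisons, which wins over
    the linear two-pointer scan when one set is much smaller than the other.
    """
    out = []
    for x in small:
        k = bisect.bisect_left(large, x)  # leftmost position where x could be
        if k < len(large) and large[k] == x:
            out.append(x)
    return out
```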

I don't believe that small sets are an issue for performance.

For me, the real question is whether there are ways of getting good performance and lower memory overhead across the entire range of object sizes in Redis.



