Not strictly related, but in Redis, intersections between sets encoded as arrays of fixed-length 16, 32, or 64-bit integers (the encoding used for small sets) can be performed very fast: because the arrays are ordered, you can simply walk two pointers through the two sets, advancing each in turn and keeping only the common elements.
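The two-pointer walk described above might look like this sketch in C (the function name `intset_intersect` is hypothetical, not actual Redis source):

```c
#include <stdint.h>
#include <stddef.h>

/* Intersect two sorted arrays of 64-bit integers using two cursors.
 * Writes common elements into out; returns the number written.
 * Hypothetical sketch, not taken from the Redis source tree. */
size_t intset_intersect(const int64_t *a, size_t alen,
                        const int64_t *b, size_t blen,
                        int64_t *out)
{
    size_t i = 0, j = 0, n = 0;
    while (i < alen && j < blen) {
        if (a[i] < b[j]) {
            i++;                 /* a is behind: advance it */
        } else if (a[i] > b[j]) {
            j++;                 /* b is behind: advance it */
        } else {
            out[n++] = a[i];     /* common element: keep it */
            i++;
            j++;
        }
    }
    return n;
}
```

Each pointer only ever moves forward, so the whole intersection is a single O(alen + blen) linear pass with cache-friendly sequential access.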
Of course there is a tension between fast intersections (which require a sorted data structure) and O(1) existence tests for elements. It depends on what you are trying to accomplish.
With warm caches, 512 intset entries, and 64-bit intset values, that's under 1 microsecond to intersect using the naive binary-search intersection algorithm.
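The naive binary-search intersection mentioned above probes the larger sorted array once per element of the smaller one, roughly slen * log2(blen) comparisons, which for 512 entries is a few thousand. A sketch under those assumptions (function names are illustrative, not Redis source):

```c
#include <stdint.h>
#include <stddef.h>

/* Binary search: returns 1 if v is present in the sorted array. */
static int sorted_contains(const int64_t *arr, size_t len, int64_t v)
{
    size_t lo = 0, hi = len;
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (arr[mid] < v)
            lo = mid + 1;
        else if (arr[mid] > v)
            hi = mid;
        else
            return 1;
    }
    return 0;
}

/* Intersect by binary-searching the larger array for each element of
 * the smaller one. Hypothetical sketch, not taken from Redis. */
size_t intersect_bsearch(const int64_t *small, size_t slen,
                         const int64_t *big, size_t blen,
                         int64_t *out)
{
    size_t n = 0;
    for (size_t i = 0; i < slen; i++)
        if (sorted_contains(big, blen, small[i]))
            out[n++] = small[i];
    return n;
}
```

This wins over the linear two-pointer walk when one set is much smaller than the other; for similarly sized sets the linear merge does fewer comparisons.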
I don't believe that small sets are an issue for performance.
For me, the real question is whether there are ways of getting good performance and lower memory overhead across the entire range of object sizes in Redis.