Recently I tried to optimize a set implementation and found that interpolation search works quite well: it's competitive with HashSet in C# (both single-digit ns for my use case), with zero memory overhead. Unfortunately, it only works well with uniform data, so I guess I'll give static search trees a try. This explanation was clear and extremely detailed.
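Roughly, the idea is to keep the elements in a sorted array and probe where the key "should" be based on its value. Here is a minimal sketch of plain interpolation search (in Python rather than my actual C# code; the names are just placeholders):

```python
# Interpolation search over a sorted list of ints, used as a set membership test.
# Works best when the keys are roughly uniformly distributed over their range.
def contains(keys, target):
    lo, hi = 0, len(keys) - 1
    while lo <= hi and keys[lo] <= target <= keys[hi]:
        if keys[hi] == keys[lo]:                # all remaining keys are equal
            return keys[lo] == target
        # Guess the position by linear interpolation between the endpoints.
        mid = lo + (target - keys[lo]) * (hi - lo) // (keys[hi] - keys[lo])
        if keys[mid] == target:
            return True
        if keys[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return False
```

On uniform keys this converges in O(log log n) expected probes, and the only storage is the sorted array itself, which is where the zero memory overhead comes from.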
At some point I was playing around with interpolation search on the human genome dataset, and it was really really terrible.
It had some very bad worst-case behaviour when the input data has big plateaus and you're searching for something around the edges of a plateau. This can be fixed by only using interpolation for the first few iterations and doing binary search for as long as needed afterwards, but that kills a lot of the predictability of the current approach. So while I really like it in theory, in practice I'm not quite convinced it can be made to work efficiently.
There are tricks: if you're searching uniformly distributed data it is unbeatable, in practice finding the target in about 5 hops (expected O(log log n)). If it hasn't converged after 5 hops, it's better to fall back to binary search.
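Something like this, sketched in Python (the cap of 5 probes and the names are illustrative, not from any particular implementation):

```python
from bisect import bisect_left

# Interpolation probes capped at max_probes, then binary search on whatever
# range is left. This bounds the worst case at the cost of a few extra steps.
def contains_hybrid(keys, target, max_probes=5):
    lo, hi = 0, len(keys) - 1
    for _ in range(max_probes):
        if lo > hi or not (keys[lo] <= target <= keys[hi]):
            return False
        if keys[hi] == keys[lo]:                # plateau: all remaining keys equal
            return keys[lo] == target
        mid = lo + (target - keys[lo]) * (hi - lo) // (keys[hi] - keys[lo])
        if keys[mid] == target:
            return True
        if keys[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    # Didn't converge: finish with binary search over the narrowed range.
    i = bisect_left(keys, target, lo, hi + 1)
    return i <= hi and keys[i] == target
```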
If the data is not uniformly distributed, one trick is to sort by a hash of the element (store the hash alongside the element, or compute it on the fly).
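A rough sketch of that trick in Python (string keys, the choice of hash, and the names are arbitrary, just to show the layout):

```python
import hashlib

# Any hash that spreads keys uniformly over a fixed range will do.
def h(key):
    return int.from_bytes(hashlib.blake2b(key.encode(), digest_size=8).digest(), "big")

# Store (hash, key) pairs sorted by hash, so lookups always see ~uniform data,
# no matter how skewed the original keys are.
def build(keys):
    return sorted((h(k), k) for k in keys)

def contains_hashed(table, key):
    target = h(key)
    lo, hi = 0, len(table) - 1
    while lo <= hi and table[lo][0] <= target <= table[hi][0]:
        if table[hi][0] == table[lo][0]:
            mid = lo                            # plateau of equal hashes
        else:
            mid = lo + (target - table[lo][0]) * (hi - lo) // (table[hi][0] - table[lo][0])
        if table[mid][0] == target:
            # Hashes can collide, so walk the run of equal hashes and compare keys.
            i = mid
            while i > 0 and table[i - 1][0] == target:
                i -= 1
            while i < len(table) and table[i][0] == target:
                if table[i][1] == key:
                    return True
                i += 1
            return False
        if table[mid][0] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return False
```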
But yeah, without those adjustments it has a worst case of O(n).