This is a very nice review, but in practice I've found the K-S test to be much less useful than it initially appears:
1. Failing to reject the null hypothesis is not the same as accepting the null hypothesis. That is, concluding "these data are from some distribution X" is spurious.
2. There's a 'sweet-spot' for the amount of data. If you have too few samples, it's very easy to fail to reject; and if you have too many, it's very easy to reject (the chart at the bottom of the "Two Sample Test" section illustrates this).
3. The question "are these data from some distribution X?" is usually too strong. It's usually more informative to ask "can these data be modelled with some distribution X?"
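Point 2 is easy to demonstrate empirically. A small sketch, assuming numpy and scipy are installed, comparing N(0, 1) against N(0.1, 1) (a real but tiny difference) at two sample sizes:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Two genuinely different distributions: N(0, 1) vs N(0.1, 1).
small_a = rng.normal(0.0, 1.0, size=30)
small_b = rng.normal(0.1, 1.0, size=30)
large_a = rng.normal(0.0, 1.0, size=100_000)
large_b = rng.normal(0.1, 1.0, size=100_000)

# With 30 samples the test almost certainly fails to reject;
# with 100k samples the same tiny shift is rejected decisively.
small_p = ks_2samp(small_a, small_b).pvalue
large_p = ks_2samp(large_a, large_b).pvalue
print(small_p, large_p)
```

Same pair of distributions, opposite verdicts, purely because of sample size.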
Agree with you on all three, but specifically for 1., can you think of pathological pairs of distinct distributions that the test would often fail to reject?
The article says it's poor at detecting differences in the tails and much better at differences in the medians. So that's where I'd start to find problems.
Playing with the tails makes all kinds of mistakes possible, but that seems like a criticism that would apply to any attempt to identify a distribution from a sample.
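One concrete pair along those lines: N(0, 1) and a Student-t with 5 degrees of freedom share a median of 0 and differ mostly in the tails, so at moderate sample sizes the two-sample K-S test usually misses the difference. A rough sketch, assuming numpy and scipy are installed:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
trials = 200
rejections = 0
for _ in range(trials):
    normal = rng.normal(size=300)
    heavy = rng.standard_t(df=5, size=300)  # heavier tails, same median
    if ks_2samp(normal, heavy).pvalue < 0.05:
        rejections += 1

# Fraction of trials where the test detects the difference at alpha = 0.05.
rejection_rate = rejections / trials
print(rejection_rate)
```

The maximum CDF gap between these two distributions sits near |x| ≈ 2, away from the median where the K-S statistic is most sensitive, so the rejection rate stays low even though the distributions are plainly distinct.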