
That was a nice summary.

The data-poor and computation-poor context of old-school statistics definitely biased the methods towards the "recipe" approach scientists are supposed to follow mechanically, where each recipe is some predefined sequence of steps, justified by analytical approximations to a sampling distribution (given lots of assumptions).

In modern computation-rich days, we can get away from the recipes by using resampling methods (e.g. permutation tests and the bootstrap), so we no longer need the analytical approximation formulas.
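As a minimal sketch of the resampling idea (a two-sample permutation test, written here in Python with NumPy; the function name and defaults are just illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def permutation_test(a, b, n_resamples=10_000):
    """Two-sample permutation test for a difference in means.

    Instead of an analytical approximation (e.g. the t-distribution),
    we repeatedly shuffle the pooled data to build the null sampling
    distribution of the test statistic directly.
    """
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    observed = a.mean() - b.mean()
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_resamples):
        rng.shuffle(pooled)
        diff = pooled[: len(a)].mean() - pooled[len(a):].mean()
        if abs(diff) >= abs(observed):
            count += 1
    # p-value: fraction of shuffled datasets at least as extreme
    # as the observed one (+1 correction avoids a p-value of zero)
    return (count + 1) / (n_resamples + 1)
```

The same shuffle-and-recompute loop works for any test statistic, which is exactly why no closed-form sampling distribution is needed.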

I think there is still room for small-sample methods, though; it's not like the biological and social sciences are dealing with very large samples.


