A small addition to yesterday's ~~rant~~ measured critique about the reporting of surveys.
Even when a result is statistically significant, that alone doesn't mean we should draw strong conclusions from it. Why not?
- Some of the assumptions the statistical tests are based on may be violated (e.g. that the sample was drawn at random).
- Statistically significant results are not always meaningful - a difference may be real, but very small. Most stories along the lines of "men are better than women at..." fall into this category.
- Some statistically significant differences are not really there. A test at the 5% level will declare a "significant" difference 5% of the time even when no difference exists - and note that this is not the same as saying only 5% of significant results are wrong; if most of the effects we go looking for aren't real, the proportion of false positives among our "discoveries" can be much higher (simulated below). If a result is particularly surprising, wait for replication before you leap to conclusions. "Extraordinary claims require extraordinary evidence" is not a metaphysical statement, it's all about the sums.
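To see that false positive rate in action, here's a minimal simulation sketch (using numpy and scipy; the sample sizes and run count are arbitrary choices for illustration). We draw two samples from the *same* population over and over, so every "significant" result is by construction a false positive - and roughly 5% of runs cross the p < 0.05 threshold anyway.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
runs, n = 10_000, 50
false_positives = 0

for _ in range(runs):
    a = rng.normal(loc=0.0, scale=1.0, size=n)  # both samples come from
    b = rng.normal(loc=0.0, scale=1.0, size=n)  # the same population
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:  # "significant" - but no real difference exists
        false_positives += 1

print(f"'significant' results with no real difference: "
      f"{false_positives / runs:.1%}")  # typically close to 5%
```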
Anyone who has followed my posts about significance tests will have realised that I think there is one simple fix for most of these problems - use confidence intervals instead. You can still be wrong, of course, but confidence intervals make you think much harder about the realities of what you do and don't know.
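As a sketch of the difference in practice (the effect size of 0.03 standard deviations and the sample size are invented numbers for illustration): below, a deliberately tiny effect is tested on a large sample. The p-value just says "significant!", while the confidence interval - computed here with a simple normal approximation, which is fine at this sample size - shows exactly how small the effect actually is.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 50_000
a = rng.normal(loc=0.00, scale=1.0, size=n)
b = rng.normal(loc=0.03, scale=1.0, size=n)  # a real, but tiny, difference

_, p = stats.ttest_ind(b, a)

# 95% confidence interval for the difference in means
# (normal approximation to the sampling distribution)
diff = b.mean() - a.mean()
se = np.sqrt(a.var(ddof=1) / n + b.var(ddof=1) / n)

print(f"p-value: {p:.2g}")  # highly 'significant'
print(f"95% CI for the difference: "
      f"[{diff - 1.96 * se:.3f}, {diff + 1.96 * se:.3f}]")
# the interval hugs ~0.03 standard deviations: real, but negligible
```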