Saturday, May 03, 2008

A Nice, Clear Critique of Hypothesis Testing

Students at times come to my office to tell me that their professors in research methods courses emphasize the importance of null hypotheses and hypothesis testing. It always brings to mind the last words Gary Chamberlain said in his econometrics class: "I didn't mention hypothesis testing, because I don't like it. And don't ever use those damned stars." [I am, of course, paraphrasing, as I took the class 25 years ago].

In any event, I ran across this nice critique of hypothesis testing. The point of statistics, after all, should be parameter estimation and prediction, not some arbitrary test of statistical significance. In large samples, statistically significant parameters can be substantively unimportant; in small samples, statistical insignificance can simply mean that we don't really know what is going on.
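To see this concretely, here is a quick simulation sketch in Python (mine, with made-up numbers, assuming numpy and scipy are installed): with a million observations, a trivially small slope shows up as "significant," while with twenty observations a sizable slope does not.

```python
# A minimal sketch (not from the critique linked above): simulate
# y = beta*x + noise. With n huge, a tiny slope is "statistically
# significant" even though it is unimportant; with n small, a sizable
# slope may be "insignificant," which just means we don't know much.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def slope_and_pvalue(n, true_beta, noise_sd=1.0):
    x = rng.normal(size=n)
    y = true_beta * x + rng.normal(scale=noise_sd, size=n)
    fit = stats.linregress(x, y)
    return fit.slope, fit.pvalue

# Large sample, trivially small effect: significant but unimportant.
b_hat, p = slope_and_pvalue(n=1_000_000, true_beta=0.005)
print(f"n=1,000,000  beta_hat={b_hat:.4f}  p={p:.3g}")

# Small sample, sizable effect: "insignificant," i.e., we just don't know.
b_hat, p = slope_and_pvalue(n=20, true_beta=0.5, noise_sd=3.0)
print(f"n=20         beta_hat={b_hat:.4f}  p={p:.3g}")
```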

Much of the confusion in how medical findings are conveyed in the media arises from the hypothesis testing tradition: the media infer that if x is a statistically significant predictor of y, then x must be important in determining y. This is not necessarily so.

2 comments:

Anonymous said...

I've always found academics strangely biased toward hypothesis testing as opposed to decision theory, even though I was always taught by my 'metrics profs to distinguish between 'importance' and 'statistical significance.'

My two favorite personal experiences are from work I did on the capitalization of above-market financing and on down-payment assistance. I noticed that the literature on capitalized financing invariably tested the null hypothesis of 'beta=0' even though theory implied 'beta=1.' I believe everyone wanted to make the easy case of 'hey, we got a significant coefficient' rather than address the issue of 'hey, we got a different coefficient than theory implies.'

In the down-payment assistance case, GAO found that in a small sample there was about an 85% probability that these things cost FHA money, and Stephen Fuller responded to the study by arguing that we hadn't proven it was the case at 95% probability. I looked at the critique and thought to myself, "there's an 85% chance that they are losing money and they shouldn't do anything because it isn't 95%? What does decision theory say about that?" Anyway, later, with an expanded dataset, I did find a greater-than-95% probability, so I guess that can finally be put to bed.
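(To make the commenter's first point concrete: the theory-relevant test is against beta = 1, not the beta = 0 that regression software reports by default. A minimal Python sketch with hypothetical variable names, illustrative rather than the actual capitalization study:)

```python
# Sketch: test H0: beta = 1 (full capitalization) instead of the default
# H0: beta = 0. Data and names here are hypothetical, for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 200
financing_premium = rng.normal(size=n)  # hypothetical regressor
price_premium = 0.7 * financing_premium + rng.normal(scale=0.5, size=n)

fit = stats.linregress(financing_premium, price_premium)
se = fit.stderr

# Default output: t-stat against beta = 0 ("hey, we got a significant coefficient").
t_vs_zero = fit.slope / se
# The theory-relevant test: t-stat against beta = 1.
t_vs_one = (fit.slope - 1.0) / se
p_vs_one = 2 * stats.t.sf(abs(t_vs_one), df=n - 2)

print(f"beta_hat={fit.slope:.3f}  t(beta=0)={t_vs_zero:.2f}  "
      f"t(beta=1)={t_vs_one:.2f}  p(beta=1)={p_vs_one:.3g}")
```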

Anonymous said...

Interesting to know.