The Fallacy of Multiple Tests
Suppose the probability of a Type I error on any single test is 0.05.
For two independent tests, the probability of at least one Type I error is 1 − (1 − 0.05)^2 = 0.0975. For ten independent tests, the probability of at least one Type I error is 1 − (1 − 0.05)^10 = 0.4013. For 20 tests, it's 1 − (1 − 0.05)^20 = 0.6415.
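These family-wise error rates are easy to verify directly. Here is a minimal sketch in plain Python (the choice of language and the loop are mine, added for illustration):

```python
# P(at least one Type I error in n independent tests at level alpha)
alpha = 0.05
for n in (2, 10, 20):
    fwer = 1 - (1 - alpha) ** n
    print(f"n = {n:2d}: P(at least one Type I error) = {fwer:.4f}")
```

Running it reproduces the three figures above: 0.0975, 0.4013, and 0.6415.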
The moral here is that if you run enough tests, you are bound to commit a Type I error sooner or later.
To put this another way, suppose you run the same regression on 50 datasets, one for each state. Your expected number of Type I errors is 50 × 0.05 = 2.5. Now, when you find that beta is significant for Utah and New Jersey, is it reasonable to toss out the results for the other 48 states and announce that you have found significance for Utah and New Jersey?
This is basically what happens when you run lots of regressions and only report the ones with nicely significant coefficient estimates.
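A small simulation makes the point concrete. The sketch below is a hypothetical illustration, not from the original: it fits a one-variable regression to 50 independent datasets in which the true beta is exactly zero, and counts how many slopes come out "significant" at the 5% level (the sample size of 100 observations per dataset is an arbitrary assumption).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_datasets, n_obs, alpha = 50, 100, 0.05  # one dataset per "state"

false_positives = 0
for _ in range(n_datasets):
    # True model: y is pure noise, so the true beta on x is zero.
    x = rng.normal(size=n_obs)
    y = rng.normal(size=n_obs)
    # OLS slope together with the p-value of its t-test.
    fit = stats.linregress(x, y)
    if fit.pvalue < alpha:
        false_positives += 1

print(f"'Significant' betas out of {n_datasets}: {false_positives}",
      f"(expected about {n_datasets * alpha:.1f})")
```

Every true beta here is zero, yet on average about 2.5 of the 50 fits will look significant; reporting only those is exactly the fallacy described above.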
Data mining (Wikipedia): beating on the data with a rubber hose.