Naughty, Naughty, Jim Manzi!: Inappropriate Misunderstanding of the Neyman-Pearson Hypothesis Testing Framework Strikes Again!
Kevin Drum sends us to:
Jim Manzi: "When interpreting the physical health results of the Oregon Experiment, we either apply a cut-off of 95% significance to identify those effects which we will treat as relevant for decision-making, or we do not. If we do apply this cut-off… then we should agree with the authors' conclusion that the experiment “showed that Medicaid coverage generated no significant improvements in measured physical health outcomes in the first 2 years.”"
Nope.
The study did not show that Medicaid coverage generated no significant improvements in measured physical health outcomes.
The study failed to show that Medicaid generated [statistically] significant improvements in measured physical health outcomes.
Confusing a failure to show statistical significance with a demonstration of the absence of substantive significance is a basic, sophomoric error.
For the study to "show that Medicaid coverage generated no significant improvements", it would have had to produce a 95% confidence interval that excludes substantively significant improvements, wouldn't it? And there is no such interval.
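To make the point concrete, here is a minimal sketch with made-up numbers (these are not the Oregon study's estimates, and the "meaningful" threshold is purely hypothetical): a confidence interval that straddles zero fails to demonstrate an improvement, but unless it also lies entirely below the substantively meaningful range, it has not demonstrated the absence of one either.

```python
# Sketch of "failed to show an improvement" vs. "showed no improvement".
# All numbers are hypothetical, NOT the Oregon Experiment's actual estimates.
from scipy import stats

estimate = 1.3    # hypothetical point estimate of the treatment effect
std_err = 1.0     # hypothetical standard error
meaningful = 2.0  # hypothetical threshold for a substantively significant improvement

# Two-sided 95% confidence interval for the effect.
z = stats.norm.ppf(0.975)
lo, hi = estimate - z * std_err, estimate + z * std_err
print(f"95% CI: ({lo:.2f}, {hi:.2f})")   # roughly (-0.66, 3.26)

# "Failed to show a significant improvement": the interval includes zero,
# so the null of no effect cannot be rejected at the 5% level.
failed_to_show_improvement = lo <= 0.0

# "Showed no significant improvement": the interval would have to lie
# entirely below the meaningful threshold, ruling such effects out.
showed_no_meaningful_improvement = hi < meaningful

print(failed_to_show_improvement)        # True: no improvement was demonstrated
print(showed_no_meaningful_improvement)  # False: meaningful improvements were not ruled out
```

On these illustrative numbers the data are compatible both with no effect and with an effect well above the meaningful threshold, which is exactly the situation in which "we failed to find a significant improvement" cannot be read as "we found there is no significant improvement."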
So: Naughty, naughty, Jim Manzi!
Neyman-Pearson is not a trolley car. You cannot get off it where you want. You have to ride it to the end of the line.