## This Is Brad Being Stupid and Slow... And, in Email, Robert Waldmann Delivers the Smackdown!

OK. The Baicker et al. (2013) *NEJM* Oregon Medicaid Lottery study…

- 6387 survey respondents in the treatment group…
- 1691 of them got Medicaid because of the lottery…
- 86 of those 1691--5.1%--should have had high A1C levels…
- 0.93 percentage points fewer of them had high A1C levels… 0.0093 x 1691 ≈ 16…
- 70 of them had high A1C levels…
- Binomial(70, 1691, 0.051, CUMULATIVE) = 0.038 :: so why isn't the lower high-A1C proportion in the treatment group a statistically significant effect?…
- First-line diabetes treatments reduce A1C by roughly 1.0 percentage points…
- If untreated A1C levels greater than 6.5 are evenly distributed between 6.5 and 9.0, successful first-line treatment should push 1.0/2.5 = 40% of them below the threshold--i.e., reduce the number of high-A1C cases from 86 to 52…
- Medicaid-style protocols are not well-designed to manage chronic diseases…
- It is hard to schedule Medicaid appointments…
- Isn't getting (86-70)/(86-52) = 0.47 of maximum clinical effectiveness a huge win for Medicaid in this case?…
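The back-of-the-envelope arithmetic in the bullets above can be checked with a short script. The inputs (1691 enrollees, a 5.1% expected high-A1C rate, 70 observed cases) are the ones from the bullets; the exact binomial CDF reproduces the 0.038 tail probability, and the last line reproduces the 0.47 clinical-effectiveness ratio:

```python
# How likely is it to see 70 or fewer high-A1C cases among 1691 enrollees
# if the true rate is 5.1%? Pure stdlib; scipy.stats.binom.cdf(70, 1691, 0.051)
# would give the same answer.
from math import lgamma, exp, log

def binom_cdf(k: int, n: int, p: float) -> float:
    """P(X <= k) for X ~ Binomial(n, p), summing exact PMF terms via log-gamma."""
    total = 0.0
    for i in range(k + 1):
        log_pmf = (lgamma(n + 1) - lgamma(i + 1) - lgamma(n - i + 1)
                   + i * log(p) + (n - i) * log(1 - p))
        total += exp(log_pmf)
    return total

n, p, observed = 1691, 0.051, 70
expected = n * p                        # ~86 expected high-A1C cases
tail = binom_cdf(observed, n, p)        # ~0.038, the one-tailed probability above
clinical_share = (86 - 70) / (86 - 52)  # ~0.47 of the maximum plausible clinical effect
print(f"expected = {expected:.1f}, P(X <= {observed}) = {tail:.3f}, share = {clinical_share:.2f}")
```

This is just the calculation the bullets sketch, not anything taken from the paper's own tables.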

And Robert Waldmann writes:

I have a guess (and the numbers work out roughly OK). I'd ask Jon Gruber, though. My thought is that the issue is using intent to treat as an instrument for the treatment. This is standard in the medical literature (it would be imposed by NEJM referees, except I'm sure they did it from the start)…

The "treatment" in question is called "Medicaid" (that is, it sure isn't insulin or an oral anti-diabetic). The intent to treat is winning the Medicaid lottery. Only around 26% of the people who won the lottery, and therefore could sign up for Medicaid, actually did sign up for Medicaid. As I understand it, the estimate was made by:

- looking at A1C>6.5% in the sample of lottery losers and lottery winners
- finding 16 fewer cases of A1C>6.5% than [the] expected [344] in the sample of lottery winners
- dividing to get an effect of around one-fourth of one percent reduction of risk of A1C>6.5% caused by winning the lottery.
- assuming the lottery only has an effect via signing up for Medicaid (the fact that the assumption is obviously true doesn't prevent it from being an assumption)
- dividing something on the order of 0.25% by 0.26.
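The last two steps are just the Wald/IV estimator: rescale the intent-to-treat effect by the take-up rate. A minimal sketch with the rough numbers from the email (these are approximations from the discussion above, not the paper's exact estimates):

```python
# Wald / instrumental-variables rescaling of the intent-to-treat effect.
# Numbers are the rough ones from the email, not the paper's own tables.
itt_effect = 0.0025    # ~0.25-point reduction in P(A1C > 6.5%) from winning the lottery
takeup_rate = 0.26     # share of lottery winners who actually enrolled in Medicaid
iv_effect = itt_effect / takeup_rate   # implied effect of Medicaid itself
print(f"implied Medicaid effect = {iv_effect:.2%}")  # ~0.96%, i.e. the ~0.93-point figure
```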

This approach is needed to eliminate bias. It is very possible that the sicker lottery winners were the ones who signed up for Medicaid. (I mean, it makes sense.) It is possible that the people who signed up for Medicaid are sicker on average than the people who lost the lottery.

However, instrumenting by the intention to provide Medicaid (winning the lottery) reduces precision. Basically, the 16 is divided by a sample size that is 1/0.26 times as big, which reduces the variance of the frequency by a factor of 0.26; but then, to get to an estimate of the effect of Medicaid, the reduction in the frequency is multiplied by 1/0.26, which increases the variance by a factor of (1/0.26)^2.

So the z-score is about half what you calculate.

This still doesn't get us to p = 0.61. But, wait: the frequency among lottery losers isn't the known population frequency--it is estimated too. In fact, the error in the difference in rates is slightly less than half from the lottery winners and slightly more than half from the lottery losers. So that's another factor of a bit more than root 2. That gets me to almost roughly around p = 0.61, not p = 0.07. Oh, and of course, the test is two-tailed because...
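Waldmann's variance accounting can be followed step by step. The starting z is backed out from the one-tailed p ≈ 0.038 above, and the two adjustment factors are the ones he names; this is a rough sketch of his argument, not the paper's actual standard-error calculation:

```python
# Rough accounting of why the naive z-score overstates precision.
from math import sqrt, erfc

z_naive = 1.78                        # z behind a one-tailed p of roughly 0.038
takeup_rate = 0.26
z_iv = z_naive * sqrt(takeup_rate)    # IV rescaling inflates the SE by 1/sqrt(0.26), about 1.96
z_adj = z_iv / sqrt(2)                # lottery losers' rate is estimated too: roughly another sqrt(2)
p_two_tailed = erfc(z_adj / sqrt(2))  # two-tailed p for the adjusted z
print(f"z = {z_adj:.2f}, two-tailed p = {p_two_tailed:.2f}")
```

The result lands in the ballpark of the paper's p = 0.61, which is the point: the rescaling and the two-sample error together eat roughly the whole apparent significance.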

I am quite sure this is all standard practice in the medical literature. As Hippocrates said, "first, do no more than 5% type-one error." Unfortunately, journalists and economists at the Mercatus Center are not the only people who interpret a failure to reject the null as a finding that the null is true.