The Massachusetts results should make you revise your estimates; they should not make you throw out everything else you know. Photographer: Spencer Platt/Getty Images

Jim Manzi, my favorite writer on randomized controlled trials, e-mails to dissent from my assessment of the Massachusetts study:

I think the issue of Bayesian updating of estimated ACA impacts is more complicated than indicated in your post.

First, there is another (not mutually exclusive) updating that needs to happen. To the extent that a specific methodology produces an estimate that is way off from what we believe to be true, we should also lower our estimate of the reliability of that method. As an extreme example, if a given analytical method concluded that a wet sponge generally combusts when exposed to oxygen, we might very slightly update our view of the likelihood that this is true, but we should surely radically reduce our estimate of the reliability of that analytical method. This is the famous "clock that chimes 13 times" dictum.

What becomes complicated is what the method is that we are updating our estimate of the reliability of. Is it these particular academics, is it diff-in-diff analysis as applied to U.S. social trends, is it this particular matching algorithm for treatment and control counties, etc? There is no absolute answer to this question. But as you know, I wrote a book that argued at length that diff-in-diff approaches in this kind of problem are very unreliable.

In the end, and as a practical matter, I agree that there should be some slight increase in a rational estimate of the likelihood of ACA improving health as a result of this study. But I would quantify it as what a mathematician terms "epsilon": representing a positive quantity that is always smaller than any real, positive number to which it is compared.
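For readers who want to see the mechanics of the joint update Manzi is describing, here is a minimal sketch in Python. Every number in it is invented for illustration; none comes from the Massachusetts study, and the priors simply encode Manzi's stated skepticism about diff-in-diff in this setting.

```python
# A minimal, illustrative sketch of Manzi's point: a surprising estimate
# should update two beliefs at once -- that the effect is real (H) and
# that the method is reliable (R). All numbers below are invented.

from itertools import product

# Priors (assumptions, not data):
p_H = 0.05   # prior that the mortality effect is as large as estimated
p_R = 0.20   # prior that diff-in-diff is reliable here (Manzi: low)

# Likelihood of observing the large estimate E in each (H, R) cell.
# An unreliable method is assumed to produce the big number about as
# often whether or not the effect is real.
likelihood = {
    (True,  True):  0.90,  # real effect, reliable method: usually detected
    (True,  False): 0.60,  # real effect, noisy method: often shows something big
    (False, True):  0.01,  # no effect, reliable method: rare false positive
    (False, False): 0.60,  # no effect, noisy method: still shows something big
}

def prior(h, r):
    # Independent priors over the two hypotheses.
    return (p_H if h else 1 - p_H) * (p_R if r else 1 - p_R)

# Joint posterior over (H, R) given that we observed E.
joint = {(h, r): prior(h, r) * likelihood[(h, r)]
         for h, r in product([True, False], repeat=2)}
p_E = sum(joint.values())
posterior = {cell: w / p_E for cell, w in joint.items()}

p_H_given_E = posterior[(True, True)] + posterior[(True, False)]
p_R_given_E = posterior[(True, True)] + posterior[(False, True)]

print(f"P(effect real):     {p_H:.3f} -> {p_H_given_E:.3f}")  # small bump
print(f"P(method reliable): {p_R:.3f} -> {p_R_given_E:.3f}")  # large drop
```

With these made-up numbers, the probability that the effect is real creeps from 5 percent to about 7 percent, while the estimated reliability of the method collapses from 20 percent to about 2 percent. That asymmetry is Manzi's "epsilon": most of the surprise gets absorbed by the method, not the conclusion.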

This is quite fair, and I think that many people read my post as expressing a more radical change in my thinking than I meant. Ultimately, I do prefer RCTs to observational studies, for the same reasons that we use RCTs, and not observational studies, to test new health-care treatments. The Massachusetts results should make you revise your estimates; they should not make you throw out everything else you know.

Tyler Cowen also weighs in:

This result seems too rapid and too large to be attributable to improved access to health care, and out of line with other more general (non-policy) estimates.

I agree that this is a big concern -- that’s why I flagged the cancer results. Cancer treatment has gotten much better in this country, but progress is still tragically inadequate. Which is why I was surprised to see them citing a significant decline in cancer mortality. Four years seems too short for early detection to be doing the work -- detecting your Stage Four breast cancer three months earlier is probably not going to save your life, or even significantly prolong it.

That also applies to cardiovascular disease: Heart damage takes years to accumulate, so this is unlikely to simply reflect better prevention.

That means that most of this benefit should be coming from late intervention. Yet if you show up at the doctor with an aggressive cancer, or at the ER with a heart attack, they don’t just send you home and suggest that you get some rest. They treat you. I can tell some story where it’s better angina management or whatever, but I can feel myself starting to stretch.

All that said, there’s a strong tendency, widely on display with the Oregon study, for people who see results that don’t suit their policy preferences to start hunting as hard as they can for reasons that those results couldn’t possibly be true. So I am suspicious of my suspicions, so to speak.

I also note that the people who are excited about this result aren’t usually highlighting what seems like an obvious implication: that ultra-expensive, intensive chemotherapy and cutting-edge hospital-based cardio interventions probably save a lot of lives, and that the cost curve is likely to bend up, not down, if we want to replicate these results elsewhere. Most are only talking about the macro result, not the micro-mechanisms.

Which brings me back to what I originally said: This makes me update my beliefs about the efficacy of health insurance at reducing mortality. But it’s a cautious update, not a radical rethink.

To contact the author of this article: Megan McArdle at mmcardle3@bloomberg.net.

To contact the editor responsible for this article: James Gibney at jgibney5@bloomberg.net.