The accidental experiment that saved 700 lives (IRS & health insurance)

From Lizka: I really enjoy the blog, “Statistical Modeling, Causal Inference, and Social Science.” Andrew Gelman, one of the authors of the blog, has given me permission to cross-post this post, which I thought some Forum readers might find interesting.

As an aside, I like many other posts on the blog. Two examples are “Parables vs. stories” and “The social sciences are useless. So why do we study them? Here’s a good reason:.”
Paul Alper sends along this news article by Sarah Kliff, who writes:
Three years ago, 3.9 million Americans received a plain-looking envelope from the Internal Revenue Service. Inside was a letter stating that they had recently paid a fine for not carrying health insurance and suggesting possible ways to enroll in coverage. . . .
Three Treasury Department economists [Jacob Goldin, Ithai Lurie, and Janet McCubbin] have published a working paper finding that these notices increased health insurance sign-ups. Obtaining insurance, they say, reduced premature deaths by an amount that exceeded any of their expectations. Americans between 45 and 64 benefited the most: For every 1,648 who received a letter, one fewer death occurred than among those who hadn’t received a letter. . . .
The experiment, made possible by an accident of budgeting, is the first rigorous experiment to find that health coverage leads to fewer deaths, a claim that politicians and economists have fiercely debated in recent years as they assess the effects of the Affordable Care Act’s coverage expansion. The results also provide belated vindication for the much-despised individual mandate that was part of Obamacare until December 2017, when Congress did away with the fine for people who don’t carry health insurance.
“There has been a lot of skepticism, especially in economics, that health insurance has a mortality impact,” said Sarah Miller, an assistant professor at the University of Michigan who researches the topic and was not involved with the Treasury research. “It’s really important that this is a randomized controlled trial. It’s a really high standard of evidence that you can’t just dismiss.”
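As a rough gloss on the “1 in 1,648” figure: it implies a difference in mortality risk between letter recipients and non-recipients of roughly 0.6 per 1,000, and the implied number of deaths averted scales with how many letters went out. Here is a minimal sketch of that arithmetic; the letter count used below is a made-up round number for illustration, not a figure from the paper.

```python
# Rough arithmetic behind "one fewer death per 1,648 letters" (ages 45-64).
# The reported figure implies a mortality risk difference between the
# letter and no-letter groups; deaths averted then scale with the number
# of letters sent. The letter count below is hypothetical.

letters_per_death_averted = 1_648          # reported figure for ages 45-64

# Implied difference in mortality risk over the study's follow-up period
risk_difference = 1 / letters_per_death_averted
print(f"Implied risk difference: {risk_difference:.5f} "
      f"({risk_difference * 1000:.2f} per 1,000 recipients)")

# Deaths averted for a hypothetical number of letters sent to this age group
n_letters_45_64 = 1_000_000                # hypothetical, not from the paper
deaths_averted = n_letters_45_64 * risk_difference
print(f"Expected deaths averted among {n_letters_45_64:,} recipients: "
      f"{deaths_averted:.0f}")
```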
This graph shows how the treatment increased health care coverage during the months after it was applied:
And here’s the estimated effect on mortality:
They should really label the lines directly. Sometimes it seems that economists think that making a graph easier to read is a form of cheating!
I’d also like to see some multilevel modeling—as it is, they end up with lots of noisy estimates, lots of wide confidence intervals, and I think more could be done.
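For readers wondering what multilevel modeling would buy here: the basic idea is partial pooling, in which noisy subgroup estimates (say, mortality effects by age group) are shrunk toward the overall mean in proportion to how uncertain each one is. Below is a toy sketch with made-up numbers, not the authors’ analysis; a real version would fit a hierarchical model to the actual data (e.g., in Stan).

```python
import numpy as np

# Toy illustration of partial pooling: shrink noisy subgroup estimates
# (e.g., mortality effects by age group) toward the overall mean.
# All numbers are made up for illustration; nothing here comes from the paper.

# Hypothetical estimated effects (deaths per 1,000) and standard errors by group
effects = np.array([-0.1, -1.2, -0.4, -1.8, -0.2])
ses     = np.array([ 0.5,  0.6,  0.4,  0.8,  0.7])

# Precision-weighted overall mean and a crude method-of-moments estimate
# of the between-group variance tau^2
overall = np.average(effects, weights=1 / ses**2)
tau2 = max(np.var(effects, ddof=1) - np.mean(ses**2), 0.0)

# Partial pooling: each estimate moves toward the overall mean,
# more so when its own standard error is large relative to tau.
shrinkage = tau2 / (tau2 + ses**2)   # 1 = keep raw estimate, 0 = pool fully
pooled = shrinkage * effects + (1 - shrinkage) * overall

for raw, se, post in zip(effects, ses, pooled):
    print(f"raw {raw:+.2f} (se {se:.2f}) -> partially pooled {post:+.2f}")
```

The groups with the largest standard errors move the most toward the overall mean, which is exactly the behavior that tames the "lots of noisy estimates, lots of wide confidence intervals" problem.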
But that’s fine. It’s best that the authors did what they did, which was to present their results. Now that the data are out there, other researchers can go back in and do more sophisticated analysis. That’s how research should go. It would not make sense for such important results to be held under wraps, waiting for some ideal statistical analysis that might never happen.
Overall, this is an inspiring story of what can be learned from a natural experiment.
The news article also has this sad conclusion:
At the end of 2017, Congress passed legislation eliminating the health law’s fines for not carrying health insurance, a change that probably guarantees that the I.R.S. letters will remain a one-time experiment.
But now that they have evidence that the letters had a positive effect, maybe they’ll restart the program, no?