I’m not sure the phone study has the traditional weaknesses of cross-sectional studies. It’s a bit more like a panel study where you can track in very fine-grained detail what events are happening and what happens to subjective wellbeing at the time. Because the event data is so fine-grained and there are so many contiguous datapoints, it provides very good evidence of causation. These sorts of studies also give intuitively plausible results for all the other events: people don’t like work, getting divorced, being unemployed, or being widowed; people like sex, seeing their friends, etc.
It’s true that people opt in, but I don’t see any particular reason to think that this would bias the sample towards ‘happy drinkers’. The same is true for other life events: maybe there is some bias such that people who enjoy sex are more likely to opt in to phone-based subjective wellbeing studies, but I don’t think that is what is driving the results.
I’m surprised you think it provides “good evidence of causation”. Having fine-grained data and many datapoints doesn’t, as far as I can tell, do anything to counter selection bias and confounding. Usually these kinds of studies would not even claim that themselves. I’m going to have to read the study properly now rather than just skimming lol.
How do you get away from the confounding? People drink at social events with friends, and people drink in the evening when they are relaxing anyway. Are people drinking because they are happy, or happy because they are drinking?
And self-selection seems like a pretty massive deal, so why do you think it isn’t? It seems likely to heavily select for “happy” drinkers, though again I could be missing something.
Re confounding, the headline estimate that James uses is adjusted for various potential confounders.
“Aside from controlling for all time-invariant factors using FE models, we control for a variety of moment-specific factors, including: what people were doing (40 activities), who they were doing it with (7 types), time of day (three-hour blocks split by weekday vs. weekend/bank holiday), location (inside/outside/in vehicle and home/work/other), and how many responses the participant has previously given. OLS estimates also include time-invariant controls for gender, employment status, marital and relationship status, household income, general health, children, single parent status, region, age and age squared at baseline. Derivations/descriptive statistics are given in Web Appendix S5.”
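To make that concrete, the specification being described is roughly of the following form. This is only an illustrative sketch – the variable names (wellbeing, drinking, activity, companion, time_block, location, n_prior_responses, person_id) are placeholders I’ve made up, not the paper’s actual data:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical experience-sampling panel: one row per momentary response.
# All column names are assumed for illustration only.
df = pd.read_csv("responses.csv")

# Person fixed effects via dummies (equivalent to the within/FE estimator here),
# plus the moment-specific controls the quoted passage lists.
model = smf.ols(
    "wellbeing ~ drinking"
    " + C(activity) + C(companion) + C(time_block) + C(location)"
    " + n_prior_responses + C(person_id)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["person_id"]})

# Adjusted association between drinking and momentary wellbeing.
print(model.params["drinking"])
```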
I don’t have a strong view on which direction the potential selection effect would point.
Thanks John. It’s great that they adjusted for confounders (and any similar study would). That so much controlling for confounders needed to be done at all shows the major weakness in this kind of study design.
I’m not saying it’s a bad study, just that it’s not fit for analysis of causation.
I think you would struggle to find many (if any) researchers who would say this study provided any more than a decent correlation between drinking and increased happiness, rather than evidence of causation. Happy to be proved wrong here and others can feel free to weigh in!
Along those lines it was mainly this comment I disagreed with.
“Because the event data is so fine-grained and there are so many contiguous datapoints, it provides very good evidence of causation.”
Another thought: I think there is a risk of overcontrolling with some of the controls used in that paper. The controls in effect assume that if people were not drinking, they would do the same thing they were in fact doing while drinking, except without drinking. But drinking might lead people to be more likely to spend time with friends, go dancing etc. If you control for what people were doing and who they were doing it with, you assume that if they weren’t drinking, people would still see their friends and go dancing sober. I think this is unlikely. I don’t like dancing, no matter what I have imbibed, but I think most people would basically never go dancing with friends if they didn’t drink. The uncontrolled effect on SWB is 10 points; some of the controls seem sensible, so that probably overstates it.
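To illustrate the overcontrolling worry, here’s a toy simulation (the effect sizes are entirely made up, it just shows the mechanism): drinking raises the chance of socialising, socialising raises wellbeing, and drinking also has a direct effect. Controlling for socialising then recovers only the direct effect and understates the total effect of drinking.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 100_000

# Toy data-generating process (made-up numbers):
drinking = rng.binomial(1, 0.3, n)
# Drinking makes socialising more likely (socialising is a mediator).
socialising = rng.binomial(1, 0.2 + 0.5 * drinking)
# Wellbeing = direct effect of drinking + effect of socialising + noise.
wellbeing = 2.0 * drinking + 5.0 * socialising + rng.normal(0, 1, n)

df = pd.DataFrame(dict(drinking=drinking, socialising=socialising, wellbeing=wellbeing))

total = smf.ols("wellbeing ~ drinking", data=df).fit().params["drinking"]
controlled = smf.ols("wellbeing ~ drinking + socialising", data=df).fit().params["drinking"]

print(f"total effect      ~ {total:.2f}")      # about 2 + 5*0.5 = 4.5
print(f"controlled effect ~ {controlled:.2f}") # about 2 (direct effect only)
```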
Yeah that’s fair enough re that part of the comment.
Yeah, I suppose I would disagree with how a lot of researchers view the strength of evidence provided by cross-sectional studies. A lot of researchers seem to endorse the proposition ‘if this could be confounded, it provides no evidence of causation’, which I don’t think is right: it depends on one’s prior on how plausible the confounder is. I think this is why a lot of economics has stopped focusing on some of the more important macro questions, and I think this is a mistake.
For example, consider the potential effects of climate change on economic performance. I do think cross-sectional evidence is highly relevant and should update one’s view. If economic performance were very strongly climatically determined, I would expect this to show up strongly in the cross-section. I wouldn’t expect to see California being way richer than Baja California, and I wouldn’t expect gross state product for US states to look like this as a function of state average temperature:
I would expect growth rates to be uniformly low in climatically exposed places like Vietnam, Bangladesh, Indonesia, India etc, which is not what we see. So, I do think this sort of evidence should update one’s view, even though there are obviously loads of potential confounders.
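For anyone wanting to eyeball this themselves, here’s a rough sketch of the kind of cross-sectional check I have in mind (the file name and column names are placeholders, not a real dataset I’m pointing to):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
import matplotlib.pyplot as plt

# Hypothetical state-level cross-section: one row per US state.
states = pd.read_csv("state_cross_section.csv")  # columns: gsp_per_capita, avg_temp_c

# Simple cross-sectional regression of log output on average temperature.
# Strong climatic determinism would show up as a large negative slope.
fit = smf.ols("np.log(gsp_per_capita) ~ avg_temp_c", data=states).fit()
print(fit.params["avg_temp_c"])

plt.scatter(states["avg_temp_c"], np.log(states["gsp_per_capita"]))
plt.xlabel("average temperature (°C)")
plt.ylabel("log GSP per capita")
plt.show()
```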
In climate economics, people don’t like this, so they have started using panel data approaches which aim to estimate the effect of exogenous weather changes on economic performance in particular periods of time. This supposedly provides better evidence of causation, but I think it should be completely ignored because of huge researcher degrees of freedom, reporting bias and political bias. I think these approaches leave the door open for econometric skullduggery to produce inflated estimates. In part because the cross-sectional evidence is more transparent, I think it is more reliable.
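For what it’s worth, the kind of panel specification I’m referring to looks roughly like this (again only a hedged sketch with placeholder names, not anyone’s actual code or data):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical country-year panel: growth rates and annual average temperature.
panel = pd.read_csv("country_year_panel.csv")  # columns: country, year, growth, temperature

# Country and year fixed effects; identification comes from year-to-year
# weather variation within a country.
fe = smf.ols("growth ~ temperature + C(country) + C(year)", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["country"]}
)
print(fe.params["temperature"])
```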