ryancbriggs
The Capability Approach to Human Welfare
Results of a survey of international development professors on EA
There is little (good) evidence that aid systematically harms political institutions
The AI Messiah
Seeking important GH or IDEV working papers to evaluate
I expected you to be right, but when I looked at the 80k job board just now, of the 962 roles 161 were in AI, 105 were in pandemics, and 308 were in global health and development. Hard to say exactly how that relates to funding, but regardless I think it shows development is also a major area of focus when measured by jobs instead of dollars.
I think that longtermism has grown very dramatically, but that it is wrong to equate it with EA (both as a matter of accurate description and for strategic reasons, as are nicely laid out in the above post).
I think the confusion here exists in part because the “EA vanguard” has been quite taken up with longtermism and this has led to people seeing it as more prominent in EA than it actually is. If you look to organizations like The Life You Can Save or Giving What We Can, they either lead with “global health and wellbeing”-type cause areas or focus on that exclusively. I don’t mean to say that this is good or bad, just that EA is less focused on longtermism than people might think based on elite messaging. IIRC this is affirmed by past community surveys.
Personally, I think OpenPhil’s worldview diversification is as good an intellectual frame for holding all this together as I’ve seen. We all get off the “crazy train” at some point, and those who think they’ll be hardcore and bite all bullets eventually hit something like this.
Thank you for this Michael. I don’t think I agree with the market metaphor, but I do think that EA is “letting this crisis go to waste” and that that is unfortunate. I’m glad you’re drawing attention to it.
My thoughts are not well-formed, but I agree that the current setup—while it makes sense historically—is not well suited for the present. Like you, I think that it would be beneficial to have more of a separation between object-level organizations focusing on specific cause areas and central organizations that are basically public goods providers for object-level organizations. This will inevitably get complicated on the margins (e.g. central orgs would likely also focus on movement building, and that will involve some opinionated choices re: cause areas), but I think that’s less of an issue and still an improvement on the present.
“Making every dollar count,” EA-related episode of In Pursuit of Development (podcast)
I strongly agree with your main point on uncertainty, and I’ll defer to you on the (lack of) consensus among happiness researchers on the question of whether or not life is getting better for humans given their paradigm.
However, I think one can easily ground out the statement “There’s compelling evidence that life has gotten better for humans recently” in ways that do not involve subjective wellbeing and if one does so then the statement is quite defensible.
Thanks for these questions.
I think that there are two main points where we disagree: first on paternalism and second on prioritizing mental states. I don’t expect I will convince you, or vice versa, but I hope that a reply is useful for the sake of other readers.
On paternalism, what makes the capability approach anti-paternalistic is that the aim is to give people options, from which they can then do whatever they want. Somewhat loosely (see fn1 and discussion in text), for an EA the capability approach means trying to max their choices. If instead you decide to try to max any specific functioning, like happiness, then you are being paternalistic, as you have decided for them what matters. Now you correctly noted that I said that in practice I think increasing income is useful. Importantly, this is not because “being rich” is a key functioning. It is because for poor people income is a factor limiting their ability to do very many things, so increasing their income increases their capabilities quite a bit. The same thing clearly applies to not dying. Perhaps of interest to HLI, I can believe that not being depressed or not having other serious mental conditions is also a very important functioning that unlocks many capabilities.
You wrote that “We can look at subjective wellbeing data from longitudinal studies to identify which capabilities have the most impact on people’s lives.” Putting aside gigantic causal inference questions, which matter, you still cannot identify “which capabilities have the most impact on people’s lives”. At best, you will identify which functionings cause increases in your measured DV, which would be something like a happiness scale. To me, this is an impoverished view of what matters to people’s lives. I will note that you did not respond to my point about getting an AI to maximize happiness or to the point that many people, such as many religious people, will just straight tell you they aren’t trying to maximize their own happiness. I think these arguments make the point that happiness is important, but it is not the one thing that we all care about.
On purely prioritizing mental states, I think it is a mistake to prioritize “an unhappy billionaire over a happy rural farmer.” I think happiness as the one master metric breaks in all sorts of real-life cases, such as the one that I gave of women in the 1970s. Rather than give more cases, which might at this point be tedious, I think we can productively relate this point back to paternalism. I think if we polled American women and asked if they would want to go back to the social world of the 1970s—when they were on average happier—they would overwhelmingly say no. I think this is because they value the freedoms they gained from the 1970s forward. If I am right that they would not want to go back to the 1970s, then to say that they are mistaken and that life for American women was better in the 1970s is, again, to me paternalistic.
Finally, I should also say thank you for engaging on this. I think the topic is important and I appreciate the questions and criticisms.
[Question] Recipe book recommendations for EAs
Good questions.
I tried to address the first one in the second part of the Downsides section. It is indeed the case that while the list of capability sets available to you is objective, your personal ranking of them is subjective and the weights can vary quite a bit. I don’t think this problem is worse than the problems other theories face (turns out adding up utility is hard), but it is a problem. I don’t want to repeat myself too much, but you can respond to this by trying to make a minimal list of capabilities that we all value highly (Nussbaum), or you can try to be very contextual (within a society or subgroup of a society, the weights may not be so different), or you can try to find minimal things that unlock lots of capabilities (like income or staying alive). There may be other things one can do too. I’d say more research here could be very useful. This approach is very young.
Re: actually satisfying preferences, if my examples about the kid growing up to be a doctor or the option to walk around at night don’t speak to you, then perhaps we just have different intuitions. One thing I will say on this is that you might think that your preferences are satisfied if the set of options is small (you’ll always have a top choice, and you might even feel quite good about it), but if the set grows you might realize that the old thing you were satisfied with is no longer what you want. You’ll only realize this if we keep increasing the capability sets you can pick from, so it does seem to me that it is useful to try to maximize the number of (value-weighted) capability sets available to people.
I appreciate the pushback. I’m thinking of all claims that go roughly like this: “a god-like creature is coming, possibly quite soon. If we do the right things before it arrives, we will experience heaven on earth. If we do not, we will perish.” This is narrower than “all transformative change” but broader than something that conditions on a specific kind of technology. To me personally, this feels like the natural opening position when considering concerns about AGI.
I think we probably agree that claims of this type are rarely correct, and I understand that some people have inside view evidence that sways them towards still believing the claim. That’s totally okay. My goal was not to try to dissuade people from believing that AGI poses a possibly large risk to humanity; it was to point to the degree to which this kind of claim is messianic. I find that interesting. At minimum, people who care a lot about AGI risk might benefit from realizing that at least some people view them as making messianic claims.
Thank you for this. I might have more to say later when I read all this more carefully, but I couldn’t find either a forest plot or a funnel plot from the meta-analysis in the report (sorry if I missed it). Could you share those or point me to where they exist? They’re both useful for understanding what is going on in the data.
I agree. I think 80k et al pushing longtermist philosophy hard was a mistake. It clearly turns some people off and it seems most actual longtermist projects (eg around pandemics or AI) are justifiable without any longtermist baggage.
Thank you for sharing these Joel. You’ve got a lot going on in the comments here, so I’m only going to make a few brief specific comments and one larger one. The larger one relates to something you’ve noted elsewhere in the thread, which is:
“That the quality of this analysis was an attempt to be more rigorous than most shallow EA analyses, but definitely less rigorous than an quality peer reviewed academic paper. I think this [...] is not something we clearly communicated.”
This work forms part of the evidence base behind some strong claims from HLI about where to give money, so I did expect it to be more rigorous. I wondered if I was alone in being surprised here, so I did a very informal (n = 23!) Twitter poll in the EA group asking about what people expected re: the rigor of evidence for charity recommendations. (I fixed my stupid Our World in Data autocorrect glitch in a follow up tweet).
I don’t want to lean on this too much, but I do think it suggests that I’m not alone in expecting a higher degree of rigor when it comes to where to put charity dollars. This is perhaps mostly a communication issue, but I also think that as the quality of analysis and evidence becomes less rigorous, claims should be toned down or at least the uncertainty (in the broad sense) needs to be more strongly expressed.
On the specifics, first, I appreciate you noting the apparent publication bias. That’s both important and not great.
Second, I think comparing the cash transfer funnel plot to the other one is informative. The cash transfer one looks “right”. It has the correct shape and it’s comforting to see the Egger regression line is basically zero. This is definitely not the case with the StrongMinds MA. The funnel plot looks incredibly weird. That could be heterogeneity that we can model, but it should regardless make everyone skeptical, because doing that kind of modelling well is very hard. It’s also rough to see that if we project the Egger regression line back to the origin, the predicted effect when the SE is zero is basically zero. In other words, unwinding publication bias in this way would lead us to guess at a true effect of around nothing. Do I believe that? I’m not sure. There are good reasons to be skeptical of Egger-type regressions, but all of this definitely increases my skepticism of the results. While I’m glad it’s public now, I don’t feel great that this wasn’t part of the very public first cut of the results.
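For readers unfamiliar with the mechanics, here is a minimal sketch (in Python, with made-up effect sizes, not HLI’s data or code) of the kind of Egger-style regression described above: regress each study’s effect size on its standard error with inverse-variance weights, and read the intercept as the predicted effect for a hypothetical study with SE = 0. Whether that intercept deserves to be treated as the “true” effect is exactly the contested part; the sketch only shows where the number comes from.

```python
import numpy as np

def egger_intercept(effects, ses):
    """Weighted least squares of effect size on standard error.

    Returns (intercept, slope); the intercept is the predicted effect
    for a hypothetical study with SE = 0.
    """
    effects = np.asarray(effects, dtype=float)
    ses = np.asarray(ses, dtype=float)
    w = 1.0 / ses**2                        # inverse-variance weights
    X = np.column_stack([np.ones_like(ses), ses])
    XtW = X.T * w                           # X'W via broadcasting
    beta = np.linalg.solve(XtW @ X, XtW @ effects)
    return beta[0], beta[1]

# Made-up numbers: the small (high-SE) studies report the largest effects.
effects = [0.9, 0.7, 0.6, 0.3, 0.2]
ses     = [0.40, 0.35, 0.30, 0.15, 0.10]
intercept, slope = egger_intercept(effects, ses)
print(f"Predicted effect at SE = 0: {intercept:.2f} (slope on SE: {slope:.2f})")
```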
Again, I appreciate you responding. I do think going forward it would be worth taking seriously community expectations about what underlies charity recommendations, and if something is tentative or rough then I hope that it gets clearly communicated as such, both originally and in downstream uses.
This is why we like to see these plots! Thank you Gregory, though this should not have been on you to do.
Having results like this underpin a charity recommendation and not showing it all transparently is a bad look for HLI. Hopefully there has been a mistake in your attempted replication and that explains e.g. the funnel plot. I look forward to reading the responses to your questions to Joel.
I will probably have longer comments later, but just on the fixed effects point, I feel it’s important to clarify that they are sometimes used in this kind of situation (when one fears publication bias or small study-type effects). For example, here is a slide deck from a paper presentation with three *highly* qualified co-authors. Slide 8 reads:
To be conservative, we use ‘fixed-effect’ MA or our new unrestricted WLS—Stanley and Doucouliagos (2015)
Not random-effects or the simple average: both are much more biased if there is publication bias (PB).
Fixed-effect (WLS-FE) is also biased with PB, but less so; thus will over-estimate the power of economic estimates.
This is basically also my takeaway. In the presence of publication bias or these small-study type effects, random effects “are much more biased” while fixed effects are “also biased [...] but less so.” Perhaps there are some disciplinary differences going on here, but what I’m saying is a reasonable position in political science, and Stanley and Doucouliagos are economists, and Ioannidis is in medicine, so using fixed effects in this context is not some weird fringe position.
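To make the intuition concrete, here is a minimal sketch (Python, illustrative numbers only, not from any of the studies discussed here) comparing a fixed-effect (inverse-variance) pooled estimate with a DerSimonian–Laird random-effects estimate when small, noisy studies report the largest effects. The fixed-effect estimate leans on the precise studies; the random-effects estimate spreads weight toward the small ones and gets pulled up more, which is the sense in which it is “more biased” under publication bias.

```python
import numpy as np

def fixed_effect(effects, variances):
    """Inverse-variance weighted (fixed-effect) pooled estimate."""
    w = 1.0 / variances
    return np.sum(w * effects) / np.sum(w)

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate with the DerSimonian-Laird tau^2."""
    w = 1.0 / variances
    mu_fe = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - mu_fe) ** 2)            # Cochran's Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)     # between-study variance
    w_re = 1.0 / (variances + tau2)                   # flatter weights
    return np.sum(w_re * effects) / np.sum(w_re)

# Illustrative only: small studies (large variances) report large effects.
effects   = np.array([1.0, 0.9, 0.8, 0.2, 0.15])
variances = np.array([0.20, 0.16, 0.12, 0.01, 0.008])

print(f"Fixed-effect estimate:   {fixed_effect(effects, variances):.2f}")
print(f"Random-effects estimate: {dersimonian_laird(effects, variances):.2f}")
```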
--
(disclosure: I have a paper under review where Stanley and Doucouliagos are co-authors)
I think this is one of those posts where the question is ultimately more valuable than the answer. And to be clear that isn’t a criticism and I upvoted the post. I appreciate posts that push people to think about important questions, even if our best guess answers are not currently very compelling.