Congratulations on launching your new organisation!
When I read your post I realised that I was confused by a few things:
(A) It seems like you think that there hasn’t been enough optimisation pressure going into the causes that EA is currently focussed on (and possibly that ‘systematic research’ is the only/best way to get sufficient levels of optimisation). You wrote:
> EA’s three big causes (i.e. global health, animal welfare and AI risk) were not chosen by systematic research, but by historical happenstance (e.g. Peter Singer being a strong supporter of animal rights, or the Future of Humanity Institute influencing the early EA movement in Oxford).
I think this is probably wrong for a few reasons:
1. There are quite a few examples of people switching between cause areas (e.g. Holden Karnofsky, Will MacAskill and Toby Ord moving from global health and development to longtermism). Organisations also seem to have historically done a decent amount of pivoting (GiveWell → GiveWell Labs/Open Phil, 80k spinning out ACE, …).
2. Finding Cause X has been a meme for a pretty long time, and I think looking for new causes/projects etc. has been pretty baked into EA since the start. I think we just haven’t found better things because the things we currently have are very good according to some worldview.
3. My impression is that many EAs (particularly highly involved EAs) have done cause prioritisation themselves. Maybe not to the rigour that you would like, but many community members doing this work themselves, plus some aggregation (looking at what people end up working on), gives some data (although I agree it’s not perfect). To some degree, cause exploration happens by default in EA.
(B) I am also a bit confused about why the goal (or proxy goal) is to find a cause every 3 years. Is it 3 rather than 1 or 6 due to resource constraints, or is this number mostly determined by some a priori sense of how many causes there ‘should’ be?
(C) Minor: You said that EA’s big 3 cause areas are global health, animal welfare and AI risk. I am not sure what the natural way of carving up the cause area space is, but I’d guess that biosecurity should also be on this list, and maybe something pointing at meta EA, depending on what you think of as a ‘cause’.
I think there are also good worldview-based explanations for why these causes should have been easy to discover and should remain among the main causes:
1. The interventions that are most cost-effective with respect to outcomes measured with RCTs (for humans) are GiveWell charity interventions. Also, for human welfare, your dollar tends to go further in developing countries, because wealthier countries spend more on health and consumption (individually and at the government level) and so have already picked the lowest-hanging fruit.
2. If you don’t require RCTs or even formal rigorous studies, but still expect feedback on outcomes close to your outcomes of interest, or remain averse to putting everything into a single one-shot (described in 3), you get high-leverage policy and R&D interventions beating GiveWell charities. Corporate and institutional farmed animal interventions will also beat GiveWell charities if you also grant substantial moral weight to nonhuman animals.
3. If you aren’t averse to allocating almost everything to shifting the distribution of a basically binary outcome like extinction (one-shotting) with very low probability, and you just take expected values through and weaken your standards of evidence even further (basically no direct feedback on the primary outcomes of interest), you get some x-risk and global catastrophic risk interventions beating GiveWell charities. If you also don’t discount moral patients in the far future, or don’t care much about nonhuman animals, these can beat all animal interventions. To many in our community, AI risk stands out as by far the most likely and most neglected such risk. (There are some subtleties I’m neglecting.)
Thanks a lot for the feedback!
(a) Agreed that there is a lot of research being done, and I think my main concern (and CE’s too, I understand, though I won’t speak for Joey and his team on this) is the issue of systematicity: causes can appear more or less important based on the specific research methodology employed, and so 1,000 causes evaluated by 1,000 people just doesn’t deliver the same actionable information as 1,000 causes evaluated by a single organization employing a single methodology.
My main outstanding uncertainty at this point is just whether such an attempt at broad, systematic research is really feasible, given how much time research is taking even at the shallow stage.
I understand that GWWC is looking to do evaluations of evaluators (e.g. GiveWell, FP, CE, etc.), and in many ways that may be far more feasible as a route to providing the EA community with systematic, comparative results: if you get a sense of how much more optimistic/pessimistic various evaluators are, you can penalize their individual cause/intervention prioritizations accordingly, and get a better sense of how disparate causes stack up against one another even if different methodologies/assumptions are used.
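To make that penalization idea concrete, here's a minimal sketch in Python. The evaluator names come from the discussion above, but the interventions, optimism factors, and cost-effectiveness numbers are all invented purely for illustration:

```python
# Hypothetical sketch: put evaluators' cost-effectiveness estimates on a
# common scale by dividing out each evaluator's estimated optimism bias.
# All numbers below are invented for illustration, not real estimates.

# Cost-effectiveness (arbitrary units) as reported by each evaluator.
reported = {
    ("GiveWell", "malaria nets"): 10.0,
    ("FP", "clean energy policy"): 40.0,
    ("CE", "tobacco taxation"): 25.0,
}

# Hypothetical "optimism factor" per evaluator: how much their estimates
# tend to overshoot a common benchmark (1.0 = unbiased).
optimism = {"GiveWell": 1.0, "FP": 2.0, "CE": 1.5}

# Penalize each estimate by its evaluator's optimism factor.
adjusted = {
    (evaluator, intervention): value / optimism[evaluator]
    for (evaluator, intervention), value in reported.items()
}

# Rank interventions on the common, bias-adjusted scale.
for (evaluator, intervention), value in sorted(
    adjusted.items(), key=lambda kv: -kv[1]
):
    print(f"{evaluator:8s} {intervention:20s} adjusted: {value:.1f}")
```

The point of the sketch is just that a single adjustment layer on top of heterogeneous evaluators can yield cross-cause comparability without redoing each evaluator's underlying methodology.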
(b) The timeline for (hopefully) finding a Cause X is fairly arbitrary! I definitely don’t have a good/strong sense of how long it’ll take, so it’s probably best to see the timeline as a kind of stretch goal meant to push the organization. I guess the other issue is how much more impactful we expect Cause X to be: the DCP global health interventions vary by a factor of roughly 10,000 in cost-effectiveness, and if you think that interventions across broad cause areas (e.g. global health vs violent conflict vs political reform vs economic policy) vary at least as much, then one might expect there to be some Cause X out there three to four orders of magnitude more impactful than top GiveWell stuff, but it’s so hard to say.
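The arithmetic behind that "three to four orders of magnitude" guess can be sketched directly; the spread figure below is the DCP-style number from the paragraph above, and everything else is an illustrative assumption:

```python
import math

# Back-of-envelope sketch with illustrative numbers (not real estimates).
# DCP-style finding: within global health, cost-effectiveness spans roughly
# a 10,000x range between the best and worst interventions.
within_cause_spread = 10_000

# Assumption: cost-effectiveness *across* broad cause areas varies at least
# as much as it does within global health. If today's top interventions
# aren't already at the very top of that wider distribution, a hypothetical
# Cause X could exceed them by a similar factor.
orders_of_magnitude = math.log10(within_cause_spread)

print(f"Within-cause spread: {within_cause_spread:,}x "
      f"(~{orders_of_magnitude:.0f} orders of magnitude)")
# Under these assumptions, Cause X could plausibly be 10^3 to 10^4 times
# as impactful as top GiveWell interventions.
```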
(c) I wrote about the issue of cause classification in somewhat more detail in the response to Aidan below!