I think there are also good worldview-based explanations for why these causes should have been easy to discover and should remain among the main causes:
1. The interventions that are most cost-effective with respect to outcomes measurable with RCTs (for humans) are GiveWell charity interventions. For human welfare, your dollar also tends to go further in developing countries, because wealthier countries spend more on health and consumption (both individually and at the government level) and so have already picked the lowest-hanging fruit.
2. If you don’t require RCTs or even formal rigorous studies, but still expect feedback on outcomes close to your outcomes of interest, or remain averse to putting everything into a single one-shot (described in point 3 below), you get high-leverage policy and R&D interventions beating GiveWell charities. Corporate and institutional farmed animal interventions will also beat GiveWell charities if you grant substantial moral weight to nonhuman animals.
3. If you aren’t averse to allocating almost everything to shifting the distribution of a basically binary, very-low-probability outcome like extinction (one-shotting), and you take expected values at face value while weakening your standards of evidence even further (accepting basically no direct feedback on the primary outcomes of interest), you get some existential and global catastrophic risk interventions beating GiveWell charities. If you also don’t discount moral patients in the far future, or don’t care much about nonhuman animals, these can beat all animal interventions. To many in our community, AI risk stands out as by far the most likely and most neglected such risk. (There are some subtleties I’m neglecting.)