I don’t see how it makes sense to anyone as a practical pursuit.
GiveWell & Open Phil have at times undertaken systematic reviews of plausible cause areas; their general framework for this seems quite practical.
That’s because, due to its nature, and given the overall soundness of the fundamental assumptions behind x-risk, it’s basically binary whether it is or isn’t the top priority.
Pretty strongly disagree with this. I think there’s a strong case for x-risk being a priority cause area, but I don’t think it dominates all other contenders. (More on this here.)
The concerns you raise in your linked post are actually the concerns I have in mind that a lot of other people have cited for why they don’t currently prioritize AI alignment, existential risk reduction, or the long-term future. Most EAs I’ve talked to who don’t share those priorities say they’d be open to shifting their priorities in that direction in the future, but currently they have unresolved issues with the level of uncertainty and speculation in these fields. Notably, EA is now putting more and more effort into the sources of those unresolved concerns with existential risk reduction, such as our ability to predict the long-term future. That work is only beginning, though.
GiveWell’s and Open Phil’s work wasn’t termed ‘Cause X,’ but I think a lot of the stuff you’re pointing to would’ve started before ‘Cause X’ was a common term in EA. Those reviews definitely qualify. One thing is that GiveWell and Open Phil are much bigger organizations than most in EA, so they are unusually able to pursue these things. So my contention that this kind of research is impractical for most organizations still holds up, though it may be falsified in the near future. Aside from GiveWell and Open Phil, the organizations that can permanently focus on cause prioritization are:
institutes at public universities with large endowments, like the Future of Humanity Institute and the Global Priorities Institute at Oxford University.
small, private non-profit organizations like Rethink Priorities.
Honestly, I’m impressed and pleasantly surprised that organizations like Rethink Priorities can grow from a small team into an established organization in EA. Cause prioritization is such a niche cause, unique to EA, that I didn’t know whether it could keep growing sustainably. So far, the growth of the field has proven sustainable, and I hope it keeps up.