Ya, I think it can be an okay heuristic.
I guess this is getting pretty specific, but if you thought

1. some other work was much more cost-effective at reducing extinction risk than AI alignment (maybe marginal AI alignment grants look pretty unimpressive, e.g. financial support for students who should be able to get enough funding from non-EA sources), and
2. s-risk orgs were much more cost-effective at reducing s-risk than AI alignment orgs not focused on s-risks (this seems pretty likely to me, and CLR seems pretty funding-constrained now),

then something like splitting between that other extinction risk work and s-risk orgs might look unambiguously better than AI alignment across the moral views you have non-negligible credence in, maybe even by consensus across approaches to moral uncertainty.