I think just picking 1 or 2 conflicts with wanting to “pick a single option that works somewhat well under multiple moral views that I have credence in”.
Ah right. Yeah, I’m not really sure I should have worded it that way. I meant that as a sort of heuristic one can use to choose a preferred option under normative uncertainty using an MEC (maximising expected choiceworthiness) approach.
For example, I tend to like AI alignment work because it seems very robust to moral views I have some non-negligible credence in (totalism, person-affecting views, symmetric views, suffering-focused views and more). So using an MEC approach, AI alignment work will score very well indeed for me. Something like reducing extinction risk from engineered pathogens scores less well for me under MEC because it (arguably) only scores very well on one of those moral views (totalism). So I’d rather give my full philanthropic budget to AI alignment than give any to risks from engineered pathogens. (EDIT: I realise this means there may be better giving opportunities for me than giving to the LTFF, which gives across different longtermist approaches.)
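For concreteness, here is a minimal sketch of how an MEC-style calculation aggregates scores across moral views. The credences and choiceworthiness numbers below are invented purely for illustration; they aren’t estimates anyone in this thread has given.

```python
# Toy MEC (maximise expected choiceworthiness) calculation.
# All credences and choiceworthiness scores are invented for illustration only.

credences = {                 # credence in each moral view (sums to 1)
    "totalism": 0.4,
    "person-affecting": 0.2,
    "symmetric": 0.2,
    "suffering-focused": 0.2,
}

# Choiceworthiness of each option under each view, on a common cardinal scale.
choiceworthiness = {
    "AI alignment": {
        "totalism": 9, "person-affecting": 7, "symmetric": 7, "suffering-focused": 8,
    },
    "Engineered-pathogen risk": {
        "totalism": 9, "person-affecting": 3, "symmetric": 3, "suffering-focused": 2,
    },
}

def expected_choiceworthiness(option: str) -> float:
    """Sum over views of credence(view) * choiceworthiness(option under view)."""
    return sum(credences[view] * score
               for view, score in choiceworthiness[option].items())

for option in choiceworthiness:
    print(option, expected_choiceworthiness(option))
# With these made-up numbers, AI alignment scores higher overall because it does
# reasonably well under every view, not just under totalism.
```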
So “pick a single option that works somewhat well under multiple moral views that I have credence in” is a heuristic, and admittedly not a good one, given that one can think up a large number of counterexamples, e.g. when things get a bit fanatical.
Ya, I think it can be an okay heuristic.
I guess this is getting pretty specific, but if you thought

1. some other work was much more cost-effective at reducing extinction risk than AI alignment (maybe marginal AI alignment grants look pretty unimpressive, e.g. financial support for students who should be able to get enough funding from non-EA sources), and
2. s-risk orgs were much more cost-effective at reducing s-risk than AI alignment orgs not focused on s-risks (this seems pretty likely to me, and CLR seems pretty funding-constrained now),

then something like splitting between that other extinction risk work and s-risk orgs might look unambiguously better than AI alignment across the moral views you have non-negligible credence in, maybe even by consensus across approaches to moral uncertainty.
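As a minimal sketch of that dominance point, again with invented numbers: if the split portfolio scores at least as well as AI alignment under every view, then it is at least as good under MEC whatever credences you assign, and most approaches to moral uncertainty that respect dominance would agree.

```python
# Toy dominance check: does the split score at least as well as AI alignment
# under every moral view? All numbers are invented for illustration only.

views = ["totalism", "person-affecting", "symmetric", "suffering-focused"]

# Choiceworthiness of each portfolio under each view (same cardinal scale as above).
portfolios = {
    "All to AI alignment":          {"totalism": 8, "person-affecting": 6, "symmetric": 6, "suffering-focused": 7},
    "Split: other x-risk + s-risk": {"totalism": 9, "person-affecting": 6, "symmetric": 7, "suffering-focused": 9},
}

alignment = portfolios["All to AI alignment"]
split = portfolios["Split: other x-risk + s-risk"]

# Weak dominance: the split is at least as choiceworthy under every view.
dominates = all(split[v] >= alignment[v] for v in views)
print("Split weakly dominates AI alignment under every view:", dominates)
# If this holds, the split is at least as good regardless of how credence is
# spread across the views, which is roughly what a consensus across approaches
# to moral uncertainty would require.
```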