Picking just option 1 or just option 2 means causing net harm on either A or B
It isn’t obvious to me that this is relevant. In your example, I suspect I would be indifferent between putting everything towards option 1, putting everything towards option 2, or any mix of the two.
I think just picking 1 or 2 conflicts with wanting to “pick a single option that works somewhat well under multiple moral views that I have credence in”.
I can make it a bit worse by making the numbers more similar:
Option 1: (1.1, −1)
Option 2: (−1, 1.1)
Option 3: (0, 0)
Picking only option 1 does about as much harm under view B as option 2 would do good, and picking only option 2 does about as much harm under view A as option 1 would do good. It seems pretty bad/unfair to me to screw over the other view this way, and a mixed strategy just seems better, unless you have justified intertheoretic comparisons.
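To make that concrete, here’s a quick, purely illustrative sketch (the names and the 50/50 split are just my toy example); read each pair above as (value under view A, value under view B):

```python
# Purely illustrative: payoffs are (value under view A, value under view B),
# using the numbers above.
options = {
    "1": (1.1, -1.0),
    "2": (-1.0, 1.1),
    "3": (0.0, 0.0),
}

def outcome(allocation):
    """Value under each view if the budget is split per `allocation` (fractions summing to 1)."""
    value_a = sum(share * options[o][0] for o, share in allocation.items())
    value_b = sum(share * options[o][1] for o, share in allocation.items())
    return value_a, value_b

print(outcome({"1": 1.0}))            # (1.1, -1.0): view B is harmed about as much as option 2 would help it
print(outcome({"2": 1.0}))            # (-1.0, 1.1): view A is harmed about as much as option 1 would help it
print(outcome({"1": 0.5, "2": 0.5}))  # roughly (0.05, 0.05): neither view ends up worse off than doing nothing
```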
Also, you might be assuming that the plausible intertheoretic comparisons all agree that 1 is better than 2, or all agree that 2 is better than 1. If there’s disagreement, you need a way to resolve that. And I think that, in many cases, you should just give substantial weight to the possibility that no intertheoretic comparisons are right, so that 1 and 2 are just incomparable. OTOH, while they might avoid these problems, variance voting and other statistical normalization methods can break down with infinities or violate the independence of irrelevant alternatives.
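In case it helps, here’s a rough sketch of what I mean by variance normalization (toy code with the same numbers as above; the function name and the 50/50 credences are invented just for illustration):

```python
# Rough sketch of variance normalization: rescale each view's scores across the
# option set so they have comparable spread, then take a credence-weighted sum.
from statistics import mean, pstdev

def variance_normalized_totals(scores_by_view, credences):
    # scores_by_view: {view: [score of each option]}; credences: {view: credence}
    n_options = len(next(iter(scores_by_view.values())))
    totals = [0.0] * n_options
    for view, scores in scores_by_view.items():
        mu, sd = mean(scores), pstdev(scores)
        if sd == 0:
            continue  # a view indifferent between all options contributes nothing
        for i, s in enumerate(scores):
            totals[i] += credences[view] * (s - mu) / sd
    return totals

# Same toy payoffs as above, with 50/50 credence in views A and B.
# Note the rescaling depends on which options are on the table, which is
# where the independence-of-irrelevant-alternatives worries come in.
print(variance_normalized_totals({"A": [1.1, -1.0, 0.0], "B": [-1.0, 1.1, 0.0]},
                                 {"A": 0.5, "B": 0.5}))
```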
I think just picking 1 or 2 conflicts with wanting to “pick a single option that works somewhat well under multiple moral views that I have credence in”.
Ah right. Yeah, I’m not really sure I should have worded it that way. I meant that as a sort of heuristic one can use to choose a preferred option under normative uncertainty using an MEC (maximizing expected choiceworthiness) approach.
For example, I tend to like AI alignment work because it seems very robust to moral views I have some non-negligible credence in (totalism, person-affecting views, symmetric views, suffering-focused views and more). So under an MEC approach, AI alignment work will score very well indeed for me. Something like reducing extinction risk from engineered pathogens scores less well for me under MEC because it (arguably) only scores very well on one of those moral views (totalism). So I’d rather give my full philanthropic budget to AI alignment than give any to reducing risks from engineered pathogens. (EDIT: I realise this means there may be better giving opportunities for me than giving to the LTFF, which will give across different longtermist approaches.)
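Roughly the kind of calculation I have in mind, with entirely made-up credences and scores just to illustrate (and assuming the views’ scales can be compared at all):

```python
# Toy illustration of the MEC idea: credence-weighted choiceworthiness.
# All numbers are invented for illustration, not my actual credences.
credences = {"totalism": 0.4, "person-affecting": 0.2, "symmetric": 0.2, "suffering-focused": 0.2}

# Choiceworthiness of each option under each view (made up).
scores = {
    "AI alignment":         {"totalism": 9, "person-affecting": 5, "symmetric": 6, "suffering-focused": 7},
    "engineered pathogens": {"totalism": 9, "person-affecting": 1, "symmetric": 2, "suffering-focused": 1},
}

def expected_choiceworthiness(option):
    return sum(credences[view] * scores[option][view] for view in credences)

for option in scores:
    print(option, expected_choiceworthiness(option))
# AI alignment comes out ahead here because it does at least somewhat well on
# every view, not just on totalism.
```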
So “pick a single option that works somewhat well under multiple moral views that I have credence in” is a heuristic, and admittedly not a great one, given that one can think up a large number of counterexamples, e.g. when things get a bit fanatical.
Ya, I think it can be an okay heuristic.

I guess this is getting pretty specific, but if you thought
1. some other work was much more cost-effective at reducing extinction risk than AI alignment (maybe marginal AI alignment grants look pretty unimpressive, e.g. financial support for students who should be able to get enough funding from non-EA sources), and
2. s-risk orgs were much more cost-effective at reducing s-risk than AI alignment orgs not focused on s-risks (this seems pretty likely to me, and CLR seems pretty funding-constrained now)
then something like splitting between that other extinction risk work and s-risk orgs might look unambiguously better than AI alignment across the moral views you have non-negligible credence in, maybe even by consensus across approaches to moral uncertainty.