Yes, I think the argument would probably hold under MEC (ignoring indirect reasons like those I gave), although I think MEC is a pretty bad approach among alternatives:
It can’t accommodate certain views that just don’t fit, even roughly, into a framework of maximizing expected utility. Most other prominent approaches can.
Intertheoretic comparisons often seem pretty arbitrary, especially with competing options for normalization (although you can normalize using statistical measures instead, like variance voting).
It makes a normative assumption that itself seems plausibly irrational and should be subject to uncertainty, specifically maximizing expected utility with an unbounded utility function. (I suppose there are similar objections to other approaches, and this leads to regress.)
MEC can be pretty “unfair” to views, and, at least with intertheoretic comparisons, is fanatical (and infinities/lexicality should dominate in particular, no matter how unlikely). In principle, it can even allow considerable overall harm on a plurality of your views (including by weight) because views to which you assign very little weight can end up dominating. EDIT: On the other hand, variance voting and other statistical normalization methods can break down with infinities or violate the independence of irrelevant alternatives.
I also think your instinct to look for a single option that does well across views is at odds with most approaches to normative uncertainty in the literature, including MEC, and with what I think is a pretty reasonable requirement for a good approach to normative uncertainty. Suppose you have two moral views, A and B, each with 50% weight, and three options with the following moral values per unit of resources, where the first entry of each pair is the moral value under A, and the second is under B (not assuming A and B use the same moral units here):
Option 1: (4, −1)
Option 2: (−1, 4)
Option 3: (1, 1)
Picking just option 1 or just option 2 means causing net harm on either A or B, but option 3 does well on both A and B. However, picking just option 3 is strictly worse than 50% option 1 + 50% option 2, which has value (1.5, 1.5).
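The arithmetic above can be sketched in a few lines of Python, using the hypothetical numbers from the example (value is assumed linear in resources, i.e. constant marginal returns):

```python
# Moral value per unit of resources under views A and B, from the example above.
options = {
    1: (4.0, -1.0),
    2: (-1.0, 4.0),
    3: (1.0, 1.0),
}

def portfolio_value(allocation):
    """Value profile (under A, under B) of an allocation of resources.

    `allocation` maps option -> fraction of resources; value under each
    view is assumed linear in resources (constant marginal returns).
    """
    value_a = sum(frac * options[opt][0] for opt, frac in allocation.items())
    value_b = sum(frac * options[opt][1] for opt, frac in allocation.items())
    return (value_a, value_b)

print(portfolio_value({3: 1.0}))          # all on option 3 -> (1.0, 1.0)
print(portfolio_value({1: 0.5, 2: 0.5}))  # 50/50 split     -> (1.5, 1.5)
```

The 50/50 split beats option 3 on both views at once, which is the sense in which it is strictly better.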
And we shouldn’t be surprised to find ourselves in situations where mixed options beat single options that do well across views, because when you optimize for A, you don’t typically expect this to be worse than what optimization for B can easily make up for, and vice versa. For example, corporate campaigns seem more cost-effective at reducing farmed animal suffering than GiveWell interventions are at causing it, because the former are chosen specifically to minimize farmed animal suffering, while GiveWell interventions are not chosen to maximize farmed animal suffering.
Furthermore, assuming constant marginal returns, MEC would never recommend mixed options (except for indirect reasons), unless the numbers really did line up nicely so that options 1 and 2 had the exact same expected choiceworthiness, and even then, it would be indifferent between pure and mixed options. It would be an extraordinarily unlikely coincidence for two options to have the exact same expected choiceworthiness for a rational Bayesian with precise probabilities.
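A sketch of why, assuming (hypothetically) that the intertheoretic comparison already puts A and B on a common scale: expected choiceworthiness is linear in the allocation fractions, so it is maximized at a pure option except in the knife-edge case of an exact tie.

```python
# Expected choiceworthiness under MEC, assuming (hypothetically) that the
# intertheoretic comparison puts A and B on a common scale, 50% credence each.
credences = {"A": 0.5, "B": 0.5}
options = {1: (4.0, -1.0), 2: (-1.0, 4.0), 3: (1.0, 1.0)}

def expected_choiceworthiness(allocation):
    # Linear in the allocation fractions, so the maximum always sits at a
    # pure option, except when two pure options tie exactly.
    total = 0.0
    for opt, frac in allocation.items():
        value_a, value_b = options[opt]
        total += frac * (credences["A"] * value_a + credences["B"] * value_b)
    return total

pure = {opt: expected_choiceworthiness({opt: 1.0}) for opt in options}
mixed = expected_choiceworthiness({1: 0.5, 2: 0.5})
print(pure)   # {1: 1.5, 2: 1.5, 3: 1.0}
print(mixed)  # 1.5
```

The symmetric numbers here happen to produce exactly the tie described above, so MEC is indifferent between option 1, option 2, and any mix of them; perturb either number slightly and it recommends a single pure option.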
Picking just option 1 or just option 2 means causing net harm on either A or B
It isn’t obvious to me this is relevant. In your example I suspect I would be indifferent between putting everything towards option 1, putting everything towards option 2, or any mix between the two.
I think just picking 1 or 2 conflicts with wanting to “pick a single option that works somewhat well under multiple moral views that I have credence in”.
I can make it a bit worse by making the numbers more similar:
Option 1: (1.1, −1)
Option 2: (−1, 1.1)
Option 3: (0, 0)
Picking only 1 does about as much harm on B as picking 2 would do good on B, and picking only 2 does about as much harm on A as picking 1 would do good on A. It seems pretty bad/unfair to me to screw over the other view this way, and a mixed strategy just seems better, unless you have justified intertheoretic comparisons.
Also, you might be assuming that the plausible intertheoretic comparisons all agree that 1 is better than 2, or all agree that 2 is better than 1. If there’s disagreement, you need a way to resolve that. And, I think you should just give substantial weight to the possibility that no intertheoretic comparisons are right in many cases, so that 1 and 2 are just incomparable. OTOH, while they might avoid these problems, variance voting and other statistical normalization methods can break down with infinities or violate the independence of irrelevant alternatives.
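For contrast, here is a minimal sketch of variance normalization ("variance voting"), with the example's numbers treated as hypothetical inputs: each view's values are rescaled to mean 0 and variance 1 across the available options before being credence-weighted, so no intertheoretic comparison is needed.

```python
from statistics import mean, pstdev

def variance_normalize(values):
    """Rescale one view's values to mean 0 and variance 1 across the options."""
    m, s = mean(values), pstdev(values)
    return [(v - m) / s for v in values]

def variance_vote(credences, view_values):
    """Score each option by the credence-weighted sum of each view's
    variance-normalized values.

    `view_values` maps view -> list of values, one per option.
    """
    normalized = {v: variance_normalize(vals) for v, vals in view_values.items()}
    n_options = len(next(iter(view_values.values())))
    return [
        sum(credences[v] * normalized[v][i] for v in view_values)
        for i in range(n_options)
    ]

# Hypothetical values for options 1-3 under views A and B (from the example):
scores = variance_vote(
    {"A": 0.5, "B": 0.5},
    {"A": [4.0, -1.0, 1.0], "B": [-1.0, 4.0, 1.0]},
)
print(scores)  # options 1 and 2 tie; option 3 scores lowest
```

Because the mean and variance are computed over whichever options happen to be on the table, adding or removing an option rescales every view's scores, which is how this family of methods can end up violating the independence of irrelevant alternatives.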
I think just picking 1 or 2 conflicts with wanting to “pick a single option that works somewhat well under multiple moral views that I have credence in”.
Ah right. Yeah I’m not really sure I should have worded it that way. I meant that as a sort of heuristic one can use to choose a preferred option under normative uncertainty using an MEC approach.
For example, I tend to like AI alignment work because it seems very robust across moral views I have non-negligible credence in (totalism, person-affecting views, symmetric views, suffering-focused views and more). So under an MEC approach, AI alignment work will score very well indeed for me. Something like reducing extinction risk from engineered pathogens scores less well for me under MEC because it (arguably) only scores very well on one of those moral views (totalism). So I’d rather give my full philanthropic budget to AI alignment than give any to reducing risks from engineered pathogens. (EDIT: I realise this means there may be better giving opportunities for me than the LTFF, which gives across different longtermist approaches.)
So “pick a single option that works somewhat well under multiple moral views that I have credence in” is a heuristic, and admittedly not a good one, given that one can think up a large number of counterexamples, e.g. when things get a bit fanatical.
Ya, I think it can be an okay heuristic.

I guess this is getting pretty specific, but if you thought
some other work was much more cost-effective at reducing extinction risk than AI alignment (maybe marginal AI alignment grants look pretty unimpressive, e.g. financial support for students who should be able to get enough funding from non-EA sources), and
s-risk orgs were much more cost-effective at reducing s-risk than AI alignment orgs not focused on s-risks (this seems pretty likely to me, and CLR seems pretty funding-constrained now)
then something like splitting between that other extinction risk work and s-risk orgs might look unambiguously better than AI alignment across the moral views you have non-negligible credence in, maybe even by consensus across approaches to moral uncertainty.