I’m looking forward to reading these critiques! A few thoughts from me on the person-affecting views critique:
Most people, myself included, find existence non-comparativism a bit bonkers. This is because most people accept that if you could create someone who you knew with certainty would live a dreadful life, you shouldn’t create them, or at least that it would be better if you didn’t (all other things equal). So when you say that existence non-comparativism is highly plausible, I’m not so sure that is true...
Arguing that existence non-comparativism and the person-affecting principle (PAP) are plausible isn’t enough to argue for a person-affecting view (PAV), because many people reject PAVs on account of their unpalatable conclusions (which can signal that the underlying motivations for PAVs are flawed). My understanding is that the most common objection to PAVs is that they run into the non-identity problem, implying for example that there’s nothing wrong with climate change turning our planet into a hellscape, because this won’t make lives worse for anyone in particular as climate change itself will change the identities of who comes into existence. Most people agree the non-identity problem is just that...a problem, because not caring about climate change seems a bit stupid. This acts against the plausibility of narrow person-affecting views.
Similarly, if we know people are going to exist in the future, it just seems obvious to most that it would be a good thing, as opposed to a neutral thing, to take measures to improve the future (conditional on the fact that people will exist).
It has been argued that moral uncertainty over population axiology pushes one towards actions endorsed by a total view even if one’s credence in these theories is low. This assumes one uses an expected moral value approach to dealing with moral uncertainty. This would in turn imply that having non-trivial credence in a narrow PAV isn’t really a problem for longtermists. So I think you have to do one of the following:
Argue why this Greaves/Ord paper has flawed reasoning
Argue that we can have zero or virtually zero credence in total views
Argue why an expected moral value approach isn’t appropriate for dealing with moral uncertainty (this is probably your best shot...)
Also, maximizing expected choice-worthiness with intertheoretic comparisons can lead to fanaticism about quantum branching actually increasing the number of distinct moral patients (rather than aggregating over the quantum measure and effectively normalizing), and that can have important consequences. See this discussion and my comment.
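To make the expected moral value approach concrete, here’s a minimal sketch in Python. All credences and choiceworthiness numbers below are made up for illustration, and the shared cardinal scale they sit on is exactly the contested intertheoretic comparison:

```python
# Illustrative sketch (numbers invented, not from the Greaves/Ord paper)
# of how maximizing expected moral value lets a low-credence total view
# dominate the recommendation.

credences = {"narrow_PAV": 0.9, "total_view": 0.1}

# Choiceworthiness of each action on each theory, assuming a shared
# cardinal scale -- which is itself the contested intertheoretic step.
choiceworthiness = {
    "narrow_PAV": {"improve_far_future": 0.0, "help_present_people": 1.0},
    "total_view": {"improve_far_future": 1e6, "help_present_people": 1.0},
}

def expected_moral_value(action):
    """Credence-weighted choiceworthiness of an action across theories."""
    return sum(credences[t] * choiceworthiness[t][action] for t in credences)

# Even at only 10% credence, the total view's huge stakes carry the day:
print(expected_moral_value("improve_far_future"))   # ~1e5
print(expected_moral_value("help_present_people"))  # ~1.0
```

The point of the sketch is only that the conclusion is driven by the size of the stakes on the total view, not by one’s credence in it.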
Argue that we can have zero or virtually zero credence in total views
FWIW, I’ve comprehensively done this in my moral anti-realism sequence. In the post Moral Realism and Moral Uncertainty Are in Tension, I argue that you cannot be morally uncertain and a confident moral realist. Then, in The “Moral Uncertainty” Rabbit Hole, Fully Excavated, I explain how moral uncertainty works if it comes with metaethical uncertainty and I discuss wagers in favor of moral realism and conditions where they work and where they fail. (I posted the latter post on April 1st thinking people would find it a welcome distraction to read something serious next to all the silly posts, but it got hardly any views, sadly.) The post ends with a list of pros and cons for “good vs. bad reasons for deferring to (more) moral reflection.” I’ll link to that section here because it summarizes under which circumstances you can place zero or virtually zero credence in some view that other sophisticated reasoners consider appealing.
On 3, I actually haven’t read the paper yet, so should probably do that, but I have a few objections:
Intertheoretic comparisons seem pretty arbitrary and unjustified. Why should there be any fact of the matter about them? If you choose some values to identify across different theories, you have to rule out alternative choices.
The kind of argument they use would probably support widespread value lexicality over a continuous total view. Consider lexical threshold total utilitarianism with multiple thresholds. For any such view (including total utilitarianism without lexical thresholds), if you add a(nother) greater threshold past the others and normalize by values closer to 0 than the new threshold, then the new view and things past the threshold will dominate the previous view and things closer to 0, respectively. I think views like maximin/leximin and maximax/leximax would dominate all forms of utilitarianism, including lexical threshold utilitarianism, because they’re effectively lexical threshold utilitarianism with lexical thresholds at every welfare level.
Unbounded utility functions, like risk-neutral expected value maximizing total utilitarianism, are vulnerable to Dutch books and money pumps, and violate the sure-thing principle, due to finite-valued lotteries with infinite or undefined expectations, like St. Petersburg lotteries. See, e.g., Paul Christiano’s comment here: https://www.lesswrong.com/posts/gJxHRxnuFudzBFPuu/better-impossibility-result-for-unbounded-utilities?commentId=hrsLNxxhsXGRH9SRx So, if we think it’s rationally required to avoid Dutch books or money pumps in principle, or to satisfy the sure-thing principle, and finite-value but infinite expected value lotteries can’t be ruled out with certainty, then risk-neutral EV-maximizing total utilitarianism is ruled out.
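The divergence doing the work here is easy to see numerically: every payoff in a St. Petersburg lottery is finite, but the partial sums of its expectation grow without bound. A quick sketch:

```python
# Sketch: the St. Petersburg lottery pays 2^k if the first heads of a fair
# coin arrives on flip k. Every payoff is finite, but each term contributes
# (1/2^k) * 2^k = 1 to the expectation, so the expectation diverges --
# the feature that gets unbounded utility functions into trouble.

def partial_expectation(n_terms):
    """Sum the first n_terms terms of the St. Petersburg expectation."""
    return sum((0.5 ** k) * (2 ** k) for k in range(1, n_terms + 1))

print(partial_expectation(10))    # 10.0
print(partial_expectation(1000))  # 1000.0 -- grows linearly, without bound
```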
When it comes to comparisons of values between PAVs and total views I don’t really see much of a problem as I’m not sure the comparison is actually inter-theoretic. Both PAVs and total views are additive, consequentialist views in which welfare is what has intrinsic value. It’s just the case that some things count under a total view that don’t under (many) PAVs i.e. the value of a new life. So accounting for both PAVs and a total view in a moral uncertainty framework doesn’t seem too much of a problem to me.
What about genuine inter-theoretic comparisons e.g. between deontology and consequentialism? Here I’m less sure but generally I’m inclined to say there still isn’t a big issue. Instead of choosing specific values, we can choose ‘categories’ of value. Consider a meteor hurtling to earth destined to wipe us all out. Under a total view we might say it would be “astronomically bad” to let the meteor wipe us out. Under a deontological view we might say it is “neutral” as we aren’t actually doing anything wrong by letting the meteor wipe us out (if you have a view that invokes an act/omission distinction). So what I’m doing here is assigning categories such as “astronomically bad”, “very bad”, “bad”, “neutral”, “good” etc. to acts under different ethical views—which seems easy enough. We can then use these categories in our moral uncertainty reasoning. This doesn’t seem that arbitrary to me, although I accept it may still run into issues.
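As a rough illustration of how these categories could feed into expected-value reasoning over the meteor case, here’s a sketch. The numeric value attached to each category is my own assumption for illustration; picking those numbers is precisely the step one might contest:

```python
# Hypothetical sketch of the 'categories' proposal applied to the meteor
# case. The category-to-number mapping below is an assumption, not part
# of the proposal itself.
CATEGORY_VALUE = {
    "astronomically bad": -1e9,
    "very bad": -1e3,
    "bad": -1.0,
    "neutral": 0.0,
    "good": 1.0,
}

credences = {"total_view": 0.5, "deontology": 0.5}

# Category verdicts on each action under each view:
verdicts = {
    "total_view": {"deflect_meteor": "neutral", "do_nothing": "astronomically bad"},
    "deontology": {"deflect_meteor": "neutral", "do_nothing": "neutral"},
}

def expected_category_value(action):
    """Credence-weighted value of an action via its category on each view."""
    return sum(credences[t] * CATEGORY_VALUE[verdicts[t][action]] for t in credences)

print(expected_category_value("deflect_meteor"))  # 0.0
print(expected_category_value("do_nothing"))      # -500000000.0
```

Deflecting the meteor wins in expectation even with substantial credence in the act/omission deontological view.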
PAVs and total views are different theories, so the comparisons are intertheoretic, by definition. Even if they agree on many rankings (in fixed population cases, say), they do so for different reasons. The value being compared is actually of a different kind, as total utilitarian value is non-comparative, but PA value is comparative.
So what I’m doing here is assigning categories such as “astronomically bad”, “very bad”, “bad”, “neutral”, “good” etc. to acts under different ethical views—which seems easy enough.
These vague categories might be useful and they do seem kind of intuitive to me, but
“Astronomically bad” effectively references the size of an affected population and hints at aggregation, so I’m not sure it’s a valid category at all for intertheoretic comparisons. Astronomically bad things are also not consistently worse than things that are not astronomically bad under all views, especially lexical views and some deontological views. You can have something which is astronomically bad on leximin (or another lexical view) due to an astronomically large (sub)population made worse off, but which is dominated by effects limited to a small (sub)population in another outcome that’s not astronomically bad. Astronomically bad might still be okay to use for person-affecting utilitarianism (PAU) vs total utilitarianism, though.
“Infinitely bad” (or “infinitely bad of a certain cardinality”) could be used to a similar effect, making lexical views dominate over classical utilitarianism (unless you use lexically “amplified” versions of classical utilitarianism, too). Things can break down if we have infinitely many different lexical thresholds, though, since there might not be a common scale to put them on if the thresholds’ orders are incompatible, but if we allow pairwise comparisons at least where there are only finitely many thresholds, we’d still have classical utilitarianism dominated by lexical threshold utilitarian views with finitely many lexical thresholds, and when considering them all together, this (I would guess) effectively gives us leximin, anyway.
These kinds of intuitive vague categories aren’t precise enough to fix exactly one normalization for each theory for the purpose of maximizing some kind of expected value over and across theories, and the results will be sensitive to which normalizations are chosen, which will themselves be somewhat arbitrary. If you used precise categories, you’d still have arbitrariness to deal with in assigning acts to categories on each view.
Comparisons between theories A and B, theories B and C and theories A and C might not be consistent with each other, unless you find a single common scale for all three theories. This limits what kinds of categories you can use to those that are universally applicable if you want to take expected values across all theories at once. You also still need the categories and the theories to be basically roughly cardinally (ratio scale) interpretable to use expected values across theories with intertheoretic comparisons, but some theories are not cardinally interpretable at all.
Vague categories like “very bad” that don’t reference objective cardinal numbers (even imprecisely) will probably not be scope-sensitive in a way that makes the total view dominate over PAVs. On a PAV according to which death is bad, killing 50% of people would plausibly hit the highest category, or near it. The gaps between the categories won’t be clear or even necessarily consistent across theories. So, I think you really need to reference cardinal numbers in these categories if you want the total view to dominate PAVs with this kind of approach.
Expected values don’t even make sense on some theories, those which are not cardinally interpretable, so it’s weird to entertain such theories and therefore the possibility that expected value reasoning is wrong, and then force them into an expected value framework anyway. If you entertain the possibility of expected value reasoning being wrong at the normative level, you should probably do so for handling moral uncertainty, too.
Some comparisons really seem to be pretty arbitrary. Consider weak negative hedonistic total utilitarianism vs classical utilitarianism, where under the weak NU view, pleasure matters 1/X times as much as suffering, or suffering matters X times more than pleasure. There are at least two possible normalizations here: a. suffering matters equally on each view, but pleasure matters X times less on the weak NU view than on CU, and b. pleasure matters equally on each view, but suffering matters X times more on the weak NU view relative to pleasure on each view. When X is large enough, the vague intuitive categories probably won’t work, and you need some way to resolve this problem. If you include both comparisons, then you’re effectively splitting one of the views into two with different cardinal strengths. To me, this undermines intertheoretic comparisons if you have two different views which make exactly the same recommendations and for (basically) the same reasons, but have different cardinal strengths. Where do these differences in cardinal strengths come from? MacAskill, Bykvist and Ord call these “amplifications” of theories in their book, and I think suggest that they will come from some universal absolute scale common across theories (chapter 6, section VII), but they don’t explain where this scale actually comes from.
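The two candidate normalizations can be made concrete with a small sketch (all numbers illustrative): the same act comes out positive in expectation under normalization (a) and negative under (b), so the choice of normalization does real work:

```python
# Sketch of the two candidate normalizations for weak negative utilitarianism
# (suffering matters X times more than pleasure) vs classical utilitarianism.
# All numbers are illustrative.
X = 100

# (suffering_weight, pleasure_weight) on a supposed common scale:
norm_a = {"CU": (-1.0, 1.0), "weak_NU": (-1.0, 1.0 / X)}  # equate suffering
norm_b = {"CU": (-1.0, 1.0), "weak_NU": (-1.0 * X, 1.0)}  # equate pleasure

def expected_value(act, norm, credence_nu=0.5):
    """Expected value of an act = (suffering units, pleasure units)
    under 50/50 credence split between CU and weak NU."""
    suffering, pleasure = act
    weights = {"CU": 1 - credence_nu, "weak_NU": credence_nu}
    return sum(
        weights[t] * (norm[t][0] * suffering + norm[t][1] * pleasure)
        for t in norm
    )

# An act creating 1 unit of suffering and 10 units of pleasure:
act = (1, 10)
print(expected_value(act, norm_a))  # positive (~4.05) under normalization (a)
print(expected_value(act, norm_b))  # negative (~-40.5) under normalization (b)
```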
My understanding is that those who support such intertheoretic comparisons only do so in limited cases anyway and so would want to combine them with another approach where intertheoretic comparisons aren’t justified. My impression is also that using intertheoretic comparisons but saying nothing when intertheoretic comparisons aren’t justified is the least general/applicable approach of those typically discussed, because it requires ratio-scale comparisons. You can use variance voting with interval-scale comparisons, and you can basically always use moral parliament or “my favourite theory”.
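For comparison, variance voting only needs interval-scale comparability: each theory’s choiceworthiness values are rescaled to mean 0 and variance 1 over the option set before taking a credence-weighted sum, so no ratio-scale intertheoretic comparison is required. A minimal sketch with made-up numbers:

```python
# Minimal sketch of variance voting over three options. Theory names and
# scores are invented; only the interval structure of each theory's scores
# matters, since normalization removes scale and location.
import statistics

def normalize(values):
    """Rescale to mean 0 and (population) standard deviation 1."""
    mu = statistics.mean(values)
    sd = statistics.pstdev(values)
    return [(v - mu) / sd for v in values]

options = ["A", "B", "C"]
theory_values = {
    "theory_1": [0.0, 1.0, 2.0],
    "theory_2": [1000.0, 0.0, 500.0],
}
credences = {"theory_1": 0.6, "theory_2": 0.4}

# Credence-weighted sum of normalized scores per option:
scores = [0.0, 0.0, 0.0]
for theory, vals in theory_values.items():
    for i, v in enumerate(normalize(vals)):
        scores[i] += credences[theory] * v

best = max(range(len(options)), key=lambda i: scores[i])
print(options[best])  # "C"
```

Note how theory_2’s much larger raw numbers don’t automatically dominate: normalization strips out each theory’s arbitrary scale.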
Some of the above objections are similar to those in this chapter by MacAskill, Bykvist and Ord, and the book generally.
About the non-identity problem: Arden Koehler wrote a review a while ago about a paper that attempts to solve it (and other problems) for person-affecting views. I don’t remember if I read the review to the end, but the idea is interesting.
About the correct way to deal with moral uncertainty: Compare with Richard Ngo’s comment on a recent thread, in a very different context.