EDIT: Maybe what you mean is that differences in empirical beliefs are not as important, perhaps because we don't disagree so much on empirical beliefs in a way that significantly influences prioritization? That seems plausible to me.
Thanks for writing this! I agree that moral (or normative) intuitions are pretty decisive, although I'd say empirical features of the world and our beliefs about them are similarly important, so I'm not sure I agree with "mostly moral intuition". For example, if chickens weren't farmed so much (in absolute and relative numbers), we wouldn't be prioritizing chicken welfare, and, more generally, if people didn't farm animals in large numbers, we wouldn't prioritize farm animal welfare. If AGI seemed impossible or much farther off, or other extinction risks seemed more likely, we would give less weight to AI risks relative to other extinction risks. Between the very broad "direct" EA causes (say global health and poverty, animal welfare, x-risks (or extinction risks and s-risks, separately)), what an individual EA prioritizes seems to be mostly based on moral intuition, but the fact that we're prioritizing these specific causes at all (rather than, say, homelessness or climate change), and the prioritization of specific interventions or sub-causes, depend a lot on empirical beliefs.
Also, a nitpick on person-affecting views:
> Rejecting person-affecting views, and the principle of neutrality, is required for (strong) longtermism.
The principle of neutrality is compatible with concern for future people and longtermism (including strong longtermism) if you reject the independence of irrelevant alternatives or transitivity, which most person-affecting views do (although not all are concerned with future people). You can hold wide person-affecting views, so that it's better for a better off person to be born than a worse off person, and we should ensure better off people come to exist than people who would be worse off, even if we should be indifferent to whether any (or how many) such additional people exist at all. There's also the possibility that people alive today could live for millions of years.
Asymmetric person-affecting views, like Meacham's to which you link, reject neutrality, because bad lives should be prevented, and are also compatible with (strong) longtermism. They might recommend ensuring future moral patients are as well off as possible or reducing s-risks. See also Thomas, 2019 for asymmetric views that allow offsetting bad lives with good lives, but not outweighing bad lives with good lives, and section 6 for practical implications.
Finally, with respect to the procreation asymmetry in particular, I think Meacham's approach offers some useful insights into how to build person-affecting views, but I think he doesn't really offer much defense of the asymmetry (or harm-minimization) itself and instead basically takes it for granted, if I recall correctly. I would recommend actualist accounts and Frick's account. Some links and discussion in my comment here.
Thanks for the thoughtful reply Michael! I think I was thinking more along the lines of what you said in your edit: empirical beliefs are very important, but we (or EAs at least) don't really disagree on them, e.g. objectively there are billions of chickens killed for food each year. Furthermore, we can actually resolve empirical disagreements with research etc., such that if we do hold differing empirical views, we can find out who is actually right (or closer to the truth). On the other hand, with moral questions, it feels like you can't actually resolve a lot of these in any meaningful way to find one correct answer. As a result, roughly holding empirical beliefs constant, moral beliefs seem to be a crux that decides what you prioritise.
I also agree with your point that differing moral intuitions probably lead to different views on worldview prioritisation (e.g. animal welfare vs global poverty) rather than intervention prioritisation (although this is also true for things like StrongMinds vs AMF).
Also appreciate the correction on person-affecting views (I feel like I tried to read a bunch of stuff on the Forum about this, including a lot from you, but still get a bit muddled up!). Will read some of the links you sent and amend the main post.