EDIT: Maybe what you mean is that differences in empirical beliefs are not as important, perhaps because we don’t disagree so much on empirical beliefs in a way that significantly influences prioritization? That seems plausible to me.
Thanks for writing this! I agree that moral (or normative) intuitions are pretty decisive, although I’d say empirical features of the world and our beliefs about them are similarly important, so I’m not sure I agree with “mostly moral intuition”. For example, if chickens weren’t farmed so much (in absolute and relative numbers), we wouldn’t be prioritizing chicken welfare, and, more generally, if people didn’t farm animals in large numbers, we wouldn’t prioritize farm animal welfare. If AGI seemed impossible or much farther off, or if other extinction risks seemed more likely, we would give less weight to AI risks relative to other extinction risks. Between the very broad “direct” EA causes (say global health and poverty, animal welfare, and x-risks (or extinction risks and s-risks, separately)), what an individual EA prioritizes seems to be mostly based on moral intuition; but the fact that we’re prioritizing these specific causes at all (rather than, say, homelessness or climate change), and the prioritization of specific interventions or sub-causes within them, depend a lot on empirical beliefs.
Also, a nitpick on person-affecting views:
“Rejecting person-affecting views, and the principle of neutrality, is required for (strong) longtermism.”
The principle of neutrality is compatible with concern for future people and longtermism (including strong longtermism) if you reject the independence of irrelevant alternatives or transitivity, which most person-affecting views do (although not all person-affecting views are concerned with future people). You can hold wide person-affecting views, on which it’s better for a better-off person to be born than a worse-off one, so we should ensure that better-off people come to exist rather than people who would be worse off, even if we should be indifferent to whether any (or how many) such additional people exist at all. There’s also the possibility that people alive today could live for millions of years.
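To make the transitivity point concrete, here’s a toy illustration (the populations and welfare comparisons are made up for this comment, not taken from any of the linked papers). Let A be the present population, B be A plus an extra person with a good life, and C be A plus a different extra person with a worse but still good life. A neutrality-respecting wide view then says:

\begin{align*}
A &\sim B \quad \text{and} \quad A \sim C && \text{(neutrality: adding either good life is a matter of indifference)} \\
B &\succ C && \text{(wide view: better that the better-off person is the one who exists)}
\end{align*}

If “equally as good as” (∼) were transitive, B ∼ A and A ∼ C would give B ∼ C, contradicting B ≻ C. So a view like this gives up transitivity of ∼ (or the independence of irrelevant alternatives) rather than giving up concern for future people.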
Asymmetric person-affecting views, like Meacham’s, which you link to, reject neutrality, because bad lives should be prevented, and they’re also compatible with (strong) longtermism: they might recommend ensuring future moral patients are as well off as possible or reducing s-risks. See also Thomas, 2019 for asymmetric views on which good lives can offset, but not outweigh, bad lives, and section 6 of that paper for practical implications.
Finally, with respect to the procreation asymmetry in particular, I think Meacham’s approach offers some useful insights into how to build person-affecting views, but he doesn’t really offer much of a defense of the asymmetry (or harm-minimization) itself and instead basically takes it for granted, if I recall correctly. I would recommend actualist accounts and Frick’s account instead. There are some links and discussion in my comment here.
Thanks for the thoughtful reply, Michael! I think I was thinking more along the lines of what you said in your edit: empirical beliefs are very important, but we (or EAs, at least) don’t really disagree much on them, e.g. there are, objectively, billions of chickens killed for food each year. Furthermore, we can actually resolve empirical disagreements with research etc., such that if we do hold differing empirical views, we can find out who is actually right (or closer to the truth). On the other hand, it feels like a lot of moral questions can’t be resolved in any meaningful way to find one correct answer. As a result, holding empirical beliefs roughly constant, moral beliefs seem to be the crux that decides what you prioritise.
I also agree with your point that differing moral intuitions probably lead to different views on worldview prioritisation (e.g. animal welfare vs global poverty) rather than intervention prioritisation (although moral intuitions can matter there too, e.g. StrongMinds vs AMF).
Also appreciate the correction on person-affecting views (I’ve tried to read a bunch of stuff on the Forum about this, including a lot from you, but I still get a bit muddled up!). Will read some of the links you sent and amend the main post.