I think it’s reasonable to say “I put some credence on moral views that imply insect suffering is very important and some credence on moral views that imply it’s not important; all things considered, I think it’s moderately important.”
A couple of other comments are gesturing at this, but this logic could be applied to all kinds of things: existential risk is probably “either” extremely important or not at all important if you plug different empirical and ethical views into a formula and trust the answer; likewise present-day global health, political polarization, developed-world mental health, etc. Ultimately, you can either (1) go all in on a particular ethical and meta-ethical theory, (2) be inconsistent, or (3) combine all these considerations into a balanced whole, in which a lot of things that pencil out as “extremely important” on some views probably wind up as moderately high priorities. I don’t think it’s obvious that (3) is right, but this post does not make an argument that (1) is right, and I think the burden of proof is on the side arguing explicitly against moderation and intuitive conclusions.
One reason to think (3) is right comes from track records. You say you “cannot be a moderate Christian,” but I don’t think religious fundamentalists have morally outperformed religious moderates. There are lots of people who take religious values seriously but not fanatically; some of the leaders of the world’s greatest social movements drew heavily on religious thinking and rhetoric without trying to follow every letter of the Bible.
If you use a standard expected-value-like method for determining preferences, you still get that insect suffering is very important. Say (for simplicity) you have a 50% credence that aggregate insect suffering is 10,000x more important than aggregate human suffering, and a 50% credence that it’s 0x as important. In expectation, it is 5,000x more important.
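To spell out that arithmetic (writing R for the insect-to-human importance ratio; the symbol is mine, for illustration only):

```latex
% Expectation of the insect-to-human importance ratio R under the
% toy credences above:
\[
  \mathbb{E}[R] = 0.5 \times 10{,}000 + 0.5 \times 0 = 5{,}000
\]
```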
If you reject expected value reasoning, then it’s not clear how you can form consistent preferences. Perhaps under a “moral parliament” view, you could allocate 50% of your charitable resources to insects and 50% to humans. IIRC there are some issues with moral parliaments (I think Toby Ord had a paper on this), but there might be some way to make it work.
Note that a world where insect suffering has a 50% chance of being 10,000x as important as human suffering and a 50% chance of being 0.0001x as important is also a world where you can say exactly the same thing with humans and insects reversed.
That should make it clear that the “in expectation, [insects are] 5,000x more important” claim is false, or, more precisely, that it requires additional assumptions: the same expected-value move, run on the human-to-insect ratio instead, concludes that humans are ~5,000x more important.
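A minimal sketch of the symmetry, using the toy numbers above (the variable names are mine, not from either comment):

```python
# Toy credences: the insect-to-human importance ratio R is 10,000 with
# probability 0.5 and 0.0001 with probability 0.5.
outcomes = [(0.5, 10_000.0), (0.5, 0.0001)]

# Expectation of R: "insects are ~5,000x more important than humans."
e_ratio = sum(p * r for p, r in outcomes)

# Expectation of 1/R: the identical argument with the species reversed,
# yielding "humans are ~5,000x more important than insects."
e_inverse = sum(p * (1 / r) for p, r in outcomes)

print(e_ratio)    # ~5000
print(e_inverse)  # ~5000
```

Both expectations come out far above 1, so the same multiplier argument “proves” both that insects dominate and that humans dominate; which conclusion you get depends on the arbitrary choice of which ratio to take the expectation over.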
This is the type of argument I was trying to eliminate when I wrote this:
https://forum.effectivealtruism.org/posts/atdmkTAnoPMfmHJsX/multiplier-arguments-are-often-flawed
I don’t know the weeds of the moral parliament view, but my suspicion is that this argument operates at too low a level of ethical views (that is, it’s “not meta enough”): it’s still just a utilitarian frame with empirical uncertainty. The kind of “credences on different moral views” I have in mind is more like:
I want my moral actions to be guided by some mix of, like, 25% bullet-biting utilitarianism (in which case insects are super important in expectation), 25% virtue ethics (in which case they’re a small consideration—you don’t want to go out of your way to hurt them, but you’re not obligated to do much in particular, and you should be way more focused on people or other animals who you have relationships with and obligations towards), 15% some kind of “stewardship of humanity” (where you maybe just want to avoid actively being a monster but should be focused elsewhere), 10% libertarianism (where it’s quite unclear how you’d treat insects), and 25% spread across other views, which mostly just point towards not being super-fanatical about any of the others. So something like 30% of me thinks insect suffering is a big deal, which is enough for me to take it seriously but not enough for me to drop the stuff that more like 75% of me thinks is a big deal; in other words, I think it’s moderately important.
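For concreteness, here is one crude way to cash out that kind of aggregation; the per-view credences come from the paragraph above, but the “big deal” scores are numbers I made up purely for illustration:

```python
# Illustrative credences over moral views (from the comment above) paired
# with made-up scores for how much each view treats insect suffering as a
# big deal. This is not a real decision procedure; it just shows how
# several moderate inputs can add up to "moderately important."
views = [
    # (view, credence, insects-are-a-big-deal score)
    ("bullet-biting utilitarianism", 0.25, 1.00),  # super important in expectation
    ("virtue ethics",                0.25, 0.05),  # a small consideration
    ("stewardship of humanity",      0.15, 0.05),  # mostly focused elsewhere
    ("libertarianism",               0.10, 0.10),  # quite unclear
    ("other views",                  0.25, 0.10),  # anti-fanatical residual
]

priority = sum(credence * score for _, credence, score in views)
print(round(priority, 3))  # ~0.3, i.e. the "something like 30%" above
```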
I don’t know what my actual numbers are, and I’m not sure each of these views is really what the respective philosophy would say about insect welfare; I’m just saying, it’s easy in this kind of framework to wind up having lots of moderate priorities that each seem extremely important on certain ethical views.