The mistake I was making, one that I think many EAs are making, is to conflate different pieces of the moral model that serve specifically different purposes. The issue isn’t just the conflation, but a missing gear about how the two relate.
Singer-ian ethics pushes you to take the entire world into your circle of concern. And this is quite important. But, it’s also quite important that the way that the entire world is in your circle of concern is different from the way your friends and government and company and tribal groups are in your circle of concern.
In particular, I was concretely assuming “torturing people to death is generally worse than lying.” But, that’s specifically comparing within alike circles. It is now quite plausible to me that lying (or even mild dishonesty) among the groups of people I actually have to coordinate with might actually be worse than allowing the torture-killing of others who I don’t have the ability to coordinate with. (Or, it might not – it depends a lot on the weightings. But it is not the straightforward question I assumed at first.)