Thanks for starting this discussion on here! I feel like part of your conclusion could also go in the opposite direction:
Animals could be less morally important because their suffering is less sophisticated in some morally relevant sense.
Increasing the lifespans of sophisticated beings who have already built the capacity to cope well with pain could be a great intervention (before the intelligence explosion).
I don’t understand how point 1 is possible. Sure, given the model, the maximum could be higher than that of all animals, or even of all humans, but this contradicts my experience: children seem to suffer more intensely than adults, and given the emotional complexity of many higher mammals, those mammals are in those terms more sophisticated beings than babies, if not toddlers.
Regarding point 2: yes, that could reduce average suffering, which matters for average utilitarians, but it does not mitigate the suffering actually experienced by other beings, which I think most other strains of utilitarianism would care about more.
1) I don’t think we can say much about intensity either. But let’s assume that intensity is equal for fully conscious entities (whatever that means). If we then assume that there might be different dimensions to suffering, more sophisticated beings could suffer on “more (morally relevant) levels” than less sophisticated beings.
2) I also think it matters to other forms of consequentialism through flow-through effects: highly resilient beings are capable of more effectively helping those who aren’t.
As I said in response to a different comment, I don’t object to the claim that we should treat them as morally equal out of ignorance, but that’s very different from your claim that we can assume the intensities are equal.
I’m also not sure what to do with the claim that there might be different morally relevant dimensions that we cannot collapse, because if that is true, we are in a situation where one point of “artistic suffering” is incommensurable with a billion points of “physical pain.” If so, we’re punting—because we do in fact make decisions between options on some basis, despite the supposedly “incommensurable” moral issues.
I do think we might be able to collapse the dimensions, and I don’t claim that the intensities—especially at the extreme ends—are equal. Let me try to put it differently: depending on how we collapse the dimensions into one, we could end up with the more complex individuals having larger scales. Ergo they could weigh more heavily in our calculus.
A being’s expression of intensity is probably always relative to its individual scale. I guess I don’t understand how that is necessarily much of an indicator of the absolute intensity of the experience. Is that where we actually diverge?