I just wanted to say that I thought this comment did a good job of explaining the basis of your moral intuitions, which I had not really seen a strong motivation for before now. I still don’t find it particularly compelling myself, but I can understand why others might find it important.
Overall, though, I find this post confusing, since the framing seems to be “Effective Altruism is making an intellectual mistake”, whereas you actually seem to have a different set of moral intuitions from those involved in EA, ones that are largely incompatible with effective altruism as it is currently practiced. While you could describe moral differences as intellectual mistakes, that does not seem to be a standard or especially helpful usage.
The comments have then mostly been people explaining why they do not find compelling your moral intuition that ‘non-purely experientially determined’ and ‘purely experientially determined’ amounts of pain cannot be compared. Since we seem to have reached a fundamental disagreement about considered moral values, attempting to change each other’s minds does not seem very fruitful.
I think I would have found this post more conceptually clear if it had been structured:
EA conclusions actually require an additional moral assumption/axiom, so if you don’t agree with this assumption then you should not obviously follow EA advice.
(Optionally) Why you find the moral assumption unconvincing or unlikely.
(Extra Optionally) Tentative suggestions for what should be done in the absence of the assumption.
Where, throughout, the assumption in question is the commensurability of ‘non-purely experientially determined’ and ‘purely experientially determined’ experience.
In general, I am not very sure what you had in mind as the ideal outcome of this post. I would be surprised if you thought most EAs agreed with your moral intuition, since so much of EA is predicated on its converse (as is much of established consequentialist thinking). But equally, I am not sure what value we can bring you if you feel very sure in your conviction that the assumption does not hold.
(Note: I also made this as a top-level comment so it would be less buried, so it might make more sense to respond there, if you would like to.)