Is there a principled place to disembark the crazy train?
To elaborate: if we take EV-maximization seriously, it appears to have non-intuitive implications, e.g. that small animals are of overwhelming moral importance in aggregate, that X-risk reduction has astronomical value, that infinite amounts of (dis)value are possible, and that there may be suffering in fundamental physics (in roughly ascending order of intuitive craziness to me).
But rejecting EV maximization also seems problematic.
Good question, but I don’t have a good answer. My answer is more pragmatic than principled (see, for example, my previous response to Devon Fritz’s question about what EA is getting most wrong).