Hopefully this is my last comment in this thread, since I don’t think there’s much more I have to say after this.
I don’t really mind if people are working on these problems, but it’s a looooong way from effective altruism.
Taking life-forms outside our observable universe into account in our moral theories seems absurd to me. Modelling our actions as affecting an infinite number of our descendants feels a lot more reasonable. (I don’t know whether it’s useful to do this, but it doesn’t seem obviously stupid.)
Many-worlds is even further away from effective altruism. (And quantum probabilities sum to 1 anyway, so there’s a natural way to weight the branches: if you decide to start shooting people if and only if a photon travels through a particular slit and interacts with a detector, the relevant branches already come with well-defined weights, ….)
I think the relevance of this post is that it tentatively endorses some type of time-discounting (and perhaps space-discounting?) in utilitarianism. That could matter for considerations of the far future, which many EAs think is very important. Though presumably we could push the asymptotic part of the discount function as far out as we like, so we shouldn’t run into any asymptotic issues in practice? (A toy example below.)
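For concreteness, here is my own toy example (not something from the post, and $T$ and $k$ are just illustrative parameters): a discount weight like
$$ w(t) = \frac{1}{1 + (t/T)^k}, \qquad T, k > 0, $$
gives essentially full weight to times $t \ll T$ and only starts suppressing value once $t$ is comparable to $T$. Since $T$ is a free parameter, the "asymptotic part" can be pushed arbitrarily far into the future, while the total weight $\int_0^\infty w(t)\,dt$ stays finite whenever $k > 1$.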