Thanks for the feedback. Couple thoughts:
1. I actually agree with you that most people shouldn't be worried about this (hence my disclaimer that this is not for a general audience). But that doesn't mean no one should care about it.
2. Whether we are concerned about an infinite amount of time or an infinite amount of space doesn't really seem relevant at a mathematical level, hence I grouped them together: either way we face a sum over a countably infinite index set (see the sketch after this list).
3. As per (1), it might not be a good use of your time to worry about this. But if it is, I would encourage you to read Nick Bostrom's paper that I linked above, since I think "just look in a local region" is too flippant. E.g. there may be an infinite number of Everett branches we should care about, even if we restrict our attention to Earth.
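To illustrate point (2), here is a minimal sketch of why the two cases look the same formally (the notation is my own, not anything from Bostrom's paper). Whether the index ranges over future times or spatial locations, a non-discounted total utilitarian sum has the same shape and the same divergence problem:

```latex
% Total utility over a countably infinite index set I, where I can
% index future times or spatial regions interchangeably. If the
% utilities stay above some epsilon > 0 on infinitely many indices,
% the sum diverges either way:
\[
  U_{\mathrm{total}} = \sum_{i \in I} u_i ,
  \qquad
  u_i \ge \epsilon > 0 \text{ for infinitely many } i
  \;\Longrightarrow\;
  U_{\mathrm{total}} = \infty .
\]
```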
Hopefully this is my last comment in this thread, since I don't think there's much more I have to say after this.
1. I don't really mind if people are working on these problems, but it's a looooong way from effective altruism.
2. Taking life-forms outside our observable universe into account in our moral theories is just absurd. Modelling our actions as affecting an infinite number of our descendants feels a lot more reasonable to me. (I don't know if it's useful to do this, but it doesn't seem obviously stupid.)
3. Many-worlds is even further away from effective altruism. (And quantum probabilities sum to 1 anyway, so there's a natural way, sketched below, to weight all the branches if you want to start shooting people if and only if a photon travels through a particular slit and interacts with a detector, ….)
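To spell out that parenthetical, here is a minimal sketch of the Born-rule weighting alluded to (the branch utilities U_i are illustrative placeholders, not anything from the thread):

```latex
% Branches i carry amplitudes \alpha_i whose Born weights |\alpha_i|^2
% sum to 1. Weighting each branch's utility U_i by its Born weight
% gives an expectation that is bounded whenever the U_i are, so
% many-worlds by itself introduces no new divergence:
\[
  \sum_i |\alpha_i|^2 = 1 ,
  \qquad
  \mathbb{E}[U] = \sum_i |\alpha_i|^2 \, U_i ,
  \qquad
  |U_i| \le M \;\Longrightarrow\; \bigl|\mathbb{E}[U]\bigr| \le M .
\]
```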
I think the relevance of this post is that it tentatively endorses some type of time-discounting (and also space-discounting?) in utilitarianism. This could be relevant to considerations of the far future, which many EAs think is very important. Though presumably we could make the asymptotic part of the function as far away as we like, so we shouldn't run into any asymptotic issues?
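As a minimal sketch of that last point (the discount schedule below is my own illustration, not something proposed in the post): keep full weight out to an arbitrarily distant horizon T and only discount beyond it. For bounded utilities the total converges no matter how large T is:

```latex
% Discount function: full weight up to horizon T, geometric decay after.
%   d(t) = 1              for t <= T
%   d(t) = \gamma^{t-T}   for t > T, with 0 < \gamma < 1
% With |u_t| <= M, the sum splits into a finite head and a convergent
% geometric tail, for any choice of T:
\[
  \sum_{t=0}^{\infty} d(t)\, u_t
  = \sum_{t=0}^{T} u_t + \sum_{t=T+1}^{\infty} \gamma^{\,t-T} u_t ,
  \qquad
  \Bigl| \sum_{t=T+1}^{\infty} \gamma^{\,t-T} u_t \Bigr|
  \le \frac{\gamma M}{1-\gamma} < \infty .
\]
```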