Thanks for clarifying! I think I get what you’re saying. This certainly is a rabbit hole. But to bring it back to the points I initially tried to make, I’m struggling to figure out what the upshot would be. The following seem to me to be the possible takeaways:
1.) While the considerations in the ballpark of what I’ve presented do have counterintuitive implications (if we’re spawning infinite divisions every second, that must have some hefty implications for how we should and shouldn’t act, mustn’t it?), fanaticism per se doesn’t have any weird implications for how we should be behaving because it is fairly likely that we’re already producing infinite amounts of value and so long shots don’t enter into it.
2.) Fanaticism per se doesn’t have any weird implications for how we should be behaving, because it is fairly likely that the best ways to produce stupendous amounts of value happen to align closely with what commonsense EA suggests we should be doing anyway. (I like Michael St. Jules’s approach to this, which says we should promote the long-term future of humanity so that we have the chance to research possible transfinite amounts of value.)
3.) These issues are so complicated that there is no way to know what to do if we go fanatical, so even if trying to create branches appears to have more expected utility than ordinary altruistic actions, we should stick to the ordinary altruistic actions to avoid opening up that can of worms.