That there are particular arguments for decisions like bednets or eating sandwiches to have expected impacts that scale with the scope of the universe or of galactic civilization. E.g. the more stars you think civilization will be able to colonize, or the more computation that will be harvested, the greater your estimate of the number of sims in situations like ours (who will act the same as we do, so that on plausible decision theories we should think of ourselves as setting policy at least for the psychologically identical ones). So if you update to thinking that civilization will be able to generate 10^40 minds per star instead of 10^30, that shouldn't change the ratio of your EV estimates for x-risk reduction and bednets, since the number appears on both sides of the equation. Here's a link to another essay making related points.
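The cancellation can be sketched in symbols (the constants $a$ and $b$ are illustrative placeholders, not from the original argument):

```latex
% Let N be the total number of minds civilization can create.
% Both expected values scale linearly in N: the x-risk term because the
% whole future is at stake, the bednet term because the number of
% simulated copies enacting your policy grows with N.
\begin{align*}
  \mathrm{EV}_{\text{x-risk}}  &\approx a \cdot N \\
  \mathrm{EV}_{\text{bednets}} &\approx b \cdot N \\
  \frac{\mathrm{EV}_{\text{x-risk}}}{\mathrm{EV}_{\text{bednets}}}
    &= \frac{a}{b} \quad \text{(independent of } N\text{)}
\end{align*}
% Updating N from 10^{30} to 10^{40} minds per star multiplies both
% numerator and denominator by 10^{10}, leaving the ratio unchanged.
```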
I hope to research topics related to this in the near future, including in-depth research on anthropics, as well as on what likely/desirable end-states of the universe are (including that we may already be in an end-state simulation) and what that implies for our actions.
I think this could be a third reason for acting to create a high level of well-being for those close to you, including yourself.
Ah yes! I think I see what you mean.