This sort of estimate is in general off by many orders of magnitude as a way of thinking about the ratio of impact between different interventions, because it only considers paths to very large numbers for the intervention under consideration, and not for the reference interventions it is compared against. For example, the expected number of lives saved by giving a bednet is infinite, given any nonzero credence that the universe is infinite. Connecting to size-of-the-accessible-universe estimates: perhaps there are many simulations of situations like ours at an astronomical scale, so that our decisions will be replicated and have effects on astronomical scales.
Any argument purporting to show <20 OOM in cost-effectiveness from astronomical waste considerations is almost always wrong for this kind of reason.
Hey Carl! Thanks for your comment. I am not sure I understand. Are you arguing something like “comparing x-risk interventions to other interventions such as bednets is invalid, because the universe may be infinite, or there may be a lot of simulations, or some other anthropic reason may make other interventions more valuable”?
That there are particular arguments for decisions like buying bednets or eating sandwiches to have expected impacts that scale with the scope of the universe or of galactic civilization. E.g., the more stars you think civilization will be able to colonize, or the more computation that will be harvested, the greater your estimate of the number of sims in situations like ours (who will act the same as we do, so that on plausible decision theories we should think of ourselves as setting policy at least for the psychologically identical ones). So if you update to think that civilization will be able to generate 10^40 minds per star instead of 10^30, that shouldn’t change the ratio of your EV estimates for x-risk reduction and bednets, since the number appears on both sides of the comparison. Here’s a link to another essay making related points.
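To make the cancellation explicit, here is a minimal worked version of this point (the symbols a, b, and S are illustrative, not from the comment above): write each intervention's expected value as a scale-independent factor times a common astronomical scale factor S, such as total future minds.

\[
\frac{\mathrm{EV}(\text{x-risk reduction})}{\mathrm{EV}(\text{bednets})} = \frac{a \cdot S}{b \cdot S} = \frac{a}{b}
\]

Updating S upward by 10 OOM (10^30 to 10^40 minds per star) multiplies numerator and denominator alike, so the ratio used to choose between the interventions is unchanged.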
Ah yes! I think I see what you mean.
I hope to research topics related to this in the near future, including in-depth work on anthropics, as well as on what the likely/desirable end-states of the universe are (including the possibility that we may already be in an end-state simulation) and what that implies for our actions.
I think this could be a third reason for acting to create a high amount of well-being for those close to you, including yourself.
I want to point out something that I find confusing.
This can’t be true unless your credence that you’re killing an infinite number of lives by buying a bednet is exactly zero, right? Otherwise, if your credence is, say, 10^-(10^10^10^10^10), then the expected number of lives saved is undefined. Am I thinking about this correctly?
Expected lives saved and taken are both infinite, yes.
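Spelled out as arithmetic (a sketch; p and q are illustrative nonzero credences, not numbers from this thread): if p is your credence that the bednet saves infinitely many lives and q is your credence that it takes infinitely many, then

\[
\mathrm{EV}(\text{bednet}) = p \cdot (+\infty) + q \cdot (-\infty) + \text{(finite terms)} = \infty - \infty,
\]

which is undefined in the extended reals, so the net expectation is ill-defined rather than simply infinite.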
I agree with the rest of your comment, but I’m a bit confused about this phrasing.
I could be wrong, but I think he meant to use “>” instead of “<”.