This sort of estimate is, in general, off by many orders of magnitude for thinking about the ratio of impact between different interventions when it only considers paths to very large numbers for the intervention under consideration, and not for the reference interventions being compared against. For example, the expected number of lives saved from giving a bednet is infinite. Connecting to size-of-the-accessible-universe estimates, perhaps there are many simulations of situations like ours at an astronomical scale, and so our decisions will be replicated and have effects on astronomical scales.
Any argument purporting to show <20 OOM in cost-effectiveness from astronomical waste considerations is almost always wrong for this kind of reason.
Hey Carl! Thanks for your comment. I am not sure I understand. Are you arguing something like “comparing x-risk interventions to other interventions such as bednets is invalid because the universe may be infinite, or there may be a lot of simulations, or some other anthropic reason may make other interventions more valuable”?
That there are particular arguments for decisions like bednets or eating sandwiches to have expected impacts that scale with the scope of the universe or of galactic civilization. E.g. the more stars you think civilization will be able to colonize, or the more computation that will be harvested, the greater your estimate of the number of sims in situations like ours (who will act the same as we do, so that on plausible decision theories we should think of ourselves as setting policy at least for the psychologically identical ones). So if you update to think that civilization will be able to generate 10^40 minds per star instead of 10^30, that shouldn’t change the ratio of your EV estimates for x-risk reduction and bednets, since the number appears on both sides of your equations. Here’s a link to another essay making related points.
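To make the cancellation concrete, here is a minimal numerical sketch in Python; the probability shifts and per-star mind counts are made-up placeholders, not figures from this thread:

```python
# Toy illustration: the astronomical factor N (future minds reachable, simulated
# copies of us, etc.) multiplies the expected value of BOTH interventions,
# so it cancels out of the ratio. All numbers are made-up placeholders.

def ev_xrisk(n_future_minds, delta_p_survival=1e-10):
    # value ~ (probability shift from the intervention) x (astronomical stake)
    return delta_p_survival * n_future_minds

def ev_bednet(n_future_minds, scale_factor=1e-15):
    # bednet impacts that (very weakly) scale with the same astronomical stake,
    # e.g. via correlated decisions of simulated copies
    return scale_factor * n_future_minds

for n in (1e30, 1e40):  # a "10^30 vs 10^40 minds per star"-style update
    print(f"N = {n:.0e}: EV ratio (x-risk / bednet) = {ev_xrisk(n) / ev_bednet(n):.0e}")
```

The printed ratio is identical for both values of N, since N appears in the numerator and the denominator.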
Ah yes! I think I see what you mean.
I hope to research topics related to this in the near future, including in-depth research on anthropics, as well as on what likely/desirable end-states of the universe are (including that we may already be in an end-state simulation) and what that implies for our actions.
I think this could be a third reason for acting to create a high amount of well-being for those in close proximity to you, including yourself.
I want to point out something that I find confusing.
This can’t be true unless your credence that you’re killing an infinite number of lives by buying a bednet is exactly zero, right? Otherwise, if your credence is, say, $10^{-10^{10^{10^{10^{10}}}}}$, then the expected number of lives saved is undefined. Am I thinking about this correctly?
Expected lives saved and taken are both infinite, yes.
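In symbols (a naive expected-value sketch; $p$ and $q$ are my labels, not numbers from the thread): let $p > 0$ be the credence that a bednet causally saves infinitely many lives and $q > 0$ the credence that it costs infinitely many. Then

$$\mathbb{E}[\text{lives saved}] = p \cdot \infty + (1-p) \cdot (\text{finite term}) = \infty,$$
$$\mathbb{E}[\text{lives lost}] = q \cdot \infty + (1-q) \cdot (\text{finite term}) = \infty,$$

and the net effect is of the form $\infty - \infty$, which is undefined, matching the worry in the question above.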
I agree with the rest of your comment, but I’m a bit confused about this phrasing.
I could be wrong, but I think he meant to use “>” instead of “<”
You might be interested in Gregory Lewis’ person-affecting value of existential risk reduction CE estimate (Guesstimate model), which arrives at a ‘cost per life year’ of $1,500-$26,000 (mean $9,200) via this chain of reasoning. My sense is that it’s a lot lower than even your pessimistic estimate mainly due to the person-affecting view constraint, but the takeaways still favor continued work & funding on reducing x-risk. Quoting Lewis:
The comments section in that post surfaces a number of other x-risk CE estimates too. 80,000 Hours also has a (simpler) CE estimate. All of them seem pretty conservative.
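For intuition about how such person-affecting estimates are typically assembled, here is a toy sketch in Python; every input is an illustrative placeholder, not a figure from Lewis’ Guesstimate model or 80,000 Hours’ estimate:

```python
# Toy person-affecting cost-effectiveness sketch for x-risk reduction.
# Only people alive today count, so the astronomical future drops out.
# Every input is an illustrative placeholder, not a cited estimate.

spend                = 1e9    # dollars spent on x-risk reduction
risk_reduction       = 1e-6   # absolute reduction in extinction probability bought
population           = 8e9    # people alive today
remaining_life_years = 40     # average remaining life expectancy per person

expected_life_years = risk_reduction * population * remaining_life_years
cost_per_life_year  = spend / expected_life_years
print(f"Cost per life-year ≈ ${cost_per_life_year:,.0f}")  # ≈ $3,125 with these inputs
```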
Another remark is on the discount rate, which you didn’t seem to include in your post (maybe I missed it?). “The discount rate effectively determines whether long- or near-termism is the best use of philanthropic resources” is a post by professional cost-effectiveness modeller Froolow that explores this in more detail, using threshold analysis and assuming exponential discounting (although I suspect many people’s actual discount rates, including mine, look more like Will MacAskill’s, which is more lay-intuitive if not very mathematically nice).
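To see why the discount rate is so decisive, a small sketch assuming simple exponential discounting (the rates and the horizon are illustrative):

```python
# Exponential discounting: a benefit V realized t years from now is worth
# V / (1 + r)**t today. Even modest positive rates make benefits centuries
# out worth almost nothing, which is why the choice of r largely decides
# whether long-termism or near-termism looks like the better buy.

def present_value(value, years, rate):
    return value / (1 + rate) ** years

for rate in (0.0, 0.01, 0.03):
    pv = present_value(1.0, years=500, rate=rate)
    print(f"r = {rate:.0%}: $1 of benefit in 500 years is worth ${pv:.2e} today")
```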
Thanks Mo! These estimates were very interesting.
As to discount rates, I was a bit confused reading William MacAskill’s discount rate post; it wasn’t clear to me that he was talking about the moral value of lives in the future, and it seemed like it might have had something to do with the value of resources instead. In “What We Owe the Future”, which is much more recent, I think MacAskill argues quite strongly that we should have a zero discount rate for the moral patienthood of future people.
In general, I tend to use a zero discount rate; I will add this to the background assumptions section, as I do think it is an important point. In my opinion, future people, and their experiences, are no more or less valuable than people alive today, though of course other people may differ. I try to address this somewhat in the section titled “Inspiration.”
A quick question: how did you come to the “high-quality longtermism work pays $100/h” figure? Is this just based on salaries for similar positions?
Good question. Like most numbers in this post, it is just a very rough approximation, used because it is a round number that I estimate is relatively close (within an order of magnitude) to the actual figure. I would guess that the number is somewhere between $50 and $200 per hour.