tl;dr: I wouldn’t put too much weight on my tweet saying that I probably wouldn’t be working on x-risk if I knew the world would end in 1,000 years, and I don’t think my (wild) guess at the tractability of x-risk mitigation is particularly pessimistic.
***
Nice post. I agree with the overall message of the post as well as much of Ben’s comment on it. In particular, I think emphasizing the significance of future generations, and not just reducing x-risk, might end up as a crux for how much you care about: a) how much an intervention reduces x-risk vs. GCRs that are unlikely to (directly?) lead to existential catastrophe; and b) whether civilization just manages to avoid x-risk vs. ends up on track to flourish as much as possible and last a lot longer than (e.g.) the typical mammalian species.
***
That said, I mostly came here to quickly caution against putting too much weight on this:
> Howie’s response is interesting to me, as it implies a fairly pessimistic assessment of tractability of x-risks given that 1,000 years would shift the calculations presented here by over an OOM (>10 generations).
That’s mostly for the general reason that I put approximately one reply tweet’s worth of effort into it. But here are some specific reasons not to put too much weight on it, and why I don’t think it implies a particularly pessimistic assessment of the tractability of x-risk:[1]
- I’m not sure I endorse the tweet on reflection, mostly because of the next point.
- I’m not sure my tweet was accounting for the (expected) size of future generations. A claim I’d feel better about would be: “I probably wouldn’t be working on x-risk reduction if I knew there would only be ~10X more beings in the future than are alive today, or if I thought the value of future generations was only ~10X that of the present.” My views on the importance of the next 1,000 years depend a lot on whether generations in the coming century are order(s) of magnitude bigger than the current generation (which seems possible if there are lots of morally relevant digital minds).[2] (The sketch after this list illustrates how much this assumption drives the numbers.)
- I haven’t thought hard about this, but I think my estimates of the cost-effectiveness of the top non-longtermist opportunities are probably higher than implied by your table.
  - I think I put more weight on the badness of being in a factory farm and (probably?) the significance of chickens than implied by Thomas’s estimate.
  - I think the very best global health interventions are probably more leveraged than giving to GiveWell.
- I find animal welfare and global poverty more intuitively motivating than working on x-risk, so the case for working on x-risk had to be pretty strong to get me to spend my career on it. (Partly for reasons I endorse, partly for reasons I don’t.)
- I think the experience I had at the time I switched the focus of my career was probably more relevant to global health and animal welfare than to x-risk reduction.
- My claim was about what I would in fact be doing, not about what I ought to be doing.
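To make the scale point concrete, here’s a minimal back-of-envelope sketch in Python. Every number in it (current population, generation length, the 100x digital-minds multiplier) is an illustrative assumption of mine, not an estimate anyone in this thread has defended:

```python
# Back-of-envelope sketch of how much the (expected) size of future
# generations drives the value of a 1,000-year horizon.
# All numbers below are illustrative assumptions, not real estimates.

CURRENT_POP = 8e9   # people alive today (rough)
GEN_LENGTH = 30     # assumed years per generation
HORIZON = 1_000     # years, per the tweet's hypothetical

generations = HORIZON // GEN_LENGTH  # ~33, hence the ">10 generations" in the quote

def beings_over_horizon(pop_multiplier: float) -> float:
    """Total beings over the horizon if each future generation
    is `pop_multiplier` times the size of the current one."""
    return generations * CURRENT_POP * pop_multiplier

baseline = beings_over_horizon(1.0)    # population stays roughly constant
digital = beings_over_horizon(100.0)   # hypothetical: digital minds add ~2 OOMs

print(f"Generations in {HORIZON} years: {generations}")
print(f"Future beings (constant population): {baseline:.1e}")
print(f"Future beings (100x digital minds):  {digital:.1e}")
print(f"Future beings relative to people alive today: "
      f"{baseline / CURRENT_POP:.0f}x vs. {digital / CURRENT_POP:.0f}x")
```

On these made-up numbers, a 1,000-year horizon is worth roughly 30x the present generation if population stays flat, but a couple of orders of magnitude more if digital minds dominate; that gap is the crux in the second bullet above.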
[1] Actual view: wildly uncertain, and it’s been a while since I last thought about this, but something like the numbers from Ben’s newsletter or what’s implied by the 0.01% fund seem within the realm of plausibility to me (a rough sketch of how a number like that could cash out follows the footnotes). Note that, as Ben says, this is my guess for the marginal dollar. I’d guess the cost-effectiveness of the average dollar is higher, and I might say something different if you caught me on a different day.
[2] Otoh, conditional on the world ending in 1,000 years maybe it’s a lot less likely that we ended up with lots of digital minds?
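As a minimal sketch of what a “0.01% fund”-style bar could imply: the dollar figure and population below are placeholder assumptions for illustration, not the fund’s actual parameters, and the calculation deliberately ignores future generations entirely:

```python
# Hypothetical sketch of the cost-effectiveness implied by a
# "reduce x-risk by one basis point" funding bar. The dollar figure
# and population are placeholder assumptions, not real parameters.

CURRENT_POP = 8e9        # people alive today (rough)
RISK_REDUCTION = 0.0001  # one basis point (0.01 percentage points) of x-risk
COST = 1e9               # assumed $ spent to buy that reduction (illustrative)

# Expected present lives saved, with the future valued at zero:
expected_lives = RISK_REDUCTION * CURRENT_POP  # 800,000
cost_per_life = COST / expected_lives          # $1,250

print(f"Expected present lives saved: {expected_lives:,.0f}")
print(f"Implied cost per present life: ${cost_per_life:,.0f}")
# Any weight placed on future generations only improves this figure.
```

On these placeholder inputs, the implied cost per present life would already be competitive with top global health charities before counting any future generations at all, which is one way of seeing why numbers in this ballpark aren’t obviously pessimistic.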
Thanks for clarifying, and apologies for making an incorrect assumption about your assessment of tractability. I’ve edited your tl;dr and a link to this comment into the post.
No worries!