My response on Facebook to Rob Wiblin’s list of triggers for leaving London:
--------
Some major uncertainties:
(a) Risk of London getting nuked within a month conditional on each of these triggers
(b) Value of a life today (i.e. willingness to pay to reduce risk of death in a world with normal levels of nuclear risk)
(c) Value of a life in a post-London-gets-nuked world (i.e. willingness to pay to increase chance that Rob Wiblin survives London getting nuked)
(Note: (c) might be higher than (b) if one can make more of a difference in the post-nuclear-war world in expectation.)
Using the March 6th estimate of 16 micromorts per month of death-by-nuke risk from staying in London[1], and assuming you'd be willing to pay $10M-$100M[2] of your own money to avert your death (i.e. $10-$100 per micromort), on March 6th it would have made sense to leave London for a month if, setting nuclear risk aside but taking altruistic impacts into account, you'd rather leave London for a month than pay $160-$1,600 to stay (or equivalently, if you'd have been willing to leave London for a month in exchange for $160-$1,600).
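To make that arithmetic explicit, here's a minimal sketch (the 16-micromort risk and the $10M-$100M value-of-life figures are the estimates above, not established facts):

```python
# Sketch of the cost-of-staying arithmetic; inputs are the post's estimates.
risk_micromorts_per_month = 16                  # March 6th risk estimate for London
values_of_life_usd = [10_000_000, 100_000_000]  # willingness to pay to avert death

for value in values_of_life_usd:
    usd_per_micromort = value / 1_000_000  # 1 micromort = 1-in-a-million death risk
    monthly_cost = risk_micromorts_per_month * usd_per_micromort
    print(f"${usd_per_micromort:.0f}/micromort -> ${monthly_cost:,.0f}/month to stay")
# -> $10/micromort -> $160/month; $100/micromort -> $1,600/month
```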
I think triggers 1-9 probably all increase the risk of London getting nuked to at least 2x what it was on March 6th. So, assuming you'd be happy to leave for a month for $320-$3,200 ignoring nuclear risk (which seems reasonable to me if your productivity wouldn't take a significant hit), I think I agree with your assessment of whether to leave.
However, it seems worth noting that many EAs working in London whose work would take a significant hit from leaving probably shouldn't leave in some of the scenarios where you say they should (specifically, the scenarios where the risk of London getting nuked would only be ~2 times, or perhaps ~2-10 times, higher than it was on March 6th). Even using the $100-per-micromort value-of-life estimate, it would only cost an extra $3,200/166.7 ≈ $20 per hour (assuming ~166.7 full-time work hours per month) for an EA org to keep a full-time employee in London at that significantly higher productivity, and that seems clearly worth doing (if necessary) for many employees at EA orgs.
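A quick sketch of that per-hour figure (the ~166.7 hours/month is my assumption of a 2,000-hour work year; the other numbers are the post's estimates):

```python
# Per-hour cost of keeping an employee in London at 2x the March 6th risk.
monthly_cost = 2 * 16 * 100    # 2x risk, $100/micromort -> $3,200/month
hours_per_month = 2000 / 12    # ~166.7, assuming a 2,000-hour work year
print(monthly_cost / hours_per_month)  # ~19.2, i.e. roughly $20/hour
```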
It seems hard to imagine that an EA would be willing to pay $100 to reduce someone's risk of death by one micromort (which increases the life expectancy of someone with 50 years left by 0.438 hours, and the expected direct work of someone with 60,000 hours of direct work left in their career by 0.06 hours) yet not be willing to pay $20 to increase the expected direct work that person does by one hour. The only justification I can come up with that might make this somewhat reasonable is thinking that a life is much more valuable in a post-nuclear-war world than in the present world.
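The two expectation figures in that parenthetical check out, as this short sketch shows:

```python
# Verifying the 0.438-hour and 0.06-hour figures above.
micromort = 1e-6                        # a 1-in-a-million chance of death
hours_per_year = 365.25 * 24            # 8,766 hours
print(micromort * 50 * hours_per_year)  # ~0.438 hours of life expectancy
print(micromort * 60_000)               # 0.06 hours of expected direct work
```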
It might also make more sense to think of this directly in terms of expected valuable work hours saved, skipping the step of assessing how much you should be willing to pay to reduce your risk of death by one micromort (since that's probably roughly a function of the value of one's work output anyway). Reducing one's risk of death by 16 micromorts saves ~1 hour of valuable work in expectation for someone with 60,000 hours of valuable work left in their career (16/10^6 × 60,000 = 0.96). So if leaving would cost you one hour of work in expectation, it wasn't worth leaving (0.96 < 1), assuming the value of your life comes entirely from the value of your work output. This also ignores the difference in the value of your life in a post-nuclear-war world compared to today's world; you should adjust for that as well.
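As a compact sketch of this framing (the career-hours figure is from the text; the function name is mine):

```python
def expected_work_hours_saved(micromorts_avoided, career_hours_left=60_000):
    """Expected hours of valuable work saved by avoiding a given death risk."""
    return micromorts_avoided * 1e-6 * career_hours_left

print(expected_work_hours_saved(16))  # ~0.96: leaving wasn't worth it if it
                                      # cost an hour or more of expected work
```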
[1] https://docs.google.com/document/d/1xrLokMs6fjSdnCtI6u9P5IwaWlvUoniS-pF2ZDuWhCY/edit
A possible story on how the value of a longtermist’s life might be higher in a post-London-gets-nuked world than in today’s world (from my comment replying to Ben Todd’s comment on this Google Doc):
--------
I think what we actually care about is the value of a life if London gets nuked relative to if it doesn't, rather than quality-adjusted life expectancy.
This might vary a lot depending on the person. For a typical person, life after London gets nuked is probably worth significantly less (as you say), but for a longtermist altruist it seems conceivable that life is actually worth more after a nuclear war. I'm not confident that's the case in expectation (more research is needed), but here's a possible story:
Perhaps after a Russia-US nuclear war that leaves London in ruins, existential risk this century is higher, because China becomes more likely than the West to create AGI (relative to the world in which nuclear war didn't occur) and China is less likely than the West to solve AI alignment. The marginal Western longtermist might then make more of a difference in expectation in the post-war world than in the world without war, due to (1) the absolute existential risk being higher in the post-war world and (2) there being fewer qualified people alive in the post-war world who could meaningfully affect the development of AGI.
If the longtermist indeed makes more of a difference to raising the probability of a very long-lasting and positive future in the post-war world than in the normal-low-nuclear-risk world, then the value of their life is higher in the post-war world, and it might make sense to use >50 years of life left for this highlighted estimate. Alternatively, saving 7 hours of life expectancy in a post-war world might be more like saving 14 hours of life in a world with normal low nuclear risk (if the longtermist's life is twice as valuable in the post-war world).
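As a toy illustration of that adjustment (the 2x multiplier is purely hypothetical, not a claim about the actual ratio):

```python
# Converting post-war hours saved into "normal-world-equivalent" hours.
post_war_value_multiplier = 2  # hypothetical: life twice as valuable post-war
hours_saved_post_war = 7
print(hours_saved_post_war * post_war_value_multiplier)  # 14 equivalent hours
```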