As the one who supervised him, I too think it’s a super exciting and useful piece of research! :)
I also like that its setup suggests a number of relatively straightforward extensions for other people to work on. Three examples:
Comparing (1) the value of an increase to B (e.g. a philanthropist investing / subsidizing investment in safety research) and (2) the value of improved international coordination (moving to the “global impatient optimum” from a “decentralized allocation” of x-risk mitigation spending at, say, the country level) to (3) a shock to growth and (4) a shock to the “rate of pure time preference” on which society chooses to invest in safety technology. (The paper currently just compares (3) and (4).)
Seeing what happens when you replace the N^(epsilon - beta) term in the hazard function with population raised to a new exponent, say N^(mu), to allow for some risky activities and/or safety measures whose contribution to existential risk depends not on the total spent on them but on the amount per capita spent on them, or something in between (sketched just after this list).
Seeing what happens when you use a different growth model—in particular, one that doesn’t depend on population growth.
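To make that second extension concrete, here is a minimal sketch, assuming (this is my reading of the setup, so treat the exact factorization as illustrative) that the hazard rate can be written as per-capita terms times a population scale factor, with $\ell_{A,t}$ and $\ell_{B,t}$ denoting per-capita inputs to risky production and to safety (my notation):

$$\delta_t = \bar{\delta}\,(A_t \ell_{A,t})^{\varepsilon}(B_t \ell_{B,t})^{-\beta} N_t^{\varepsilon-\beta} \;\longrightarrow\; \delta_t = \bar{\delta}\,(A_t \ell_{A,t})^{\varepsilon}(B_t \ell_{B,t})^{-\beta} N_t^{\mu}$$

Here mu = epsilon - beta recovers the original case (aggregate spending is what matters), mu = 0 makes risk depend only on per-capita quantities, and intermediate values of mu interpolate between the two.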
Yes, great paper and exciting work. Here are some further questions I’d be interested in (apologies if they result from misunderstanding the paper—I’ve only skimmed it once).
1) I’d love to see more work on Phil’s first bullet point above.
Would you guess that, due to the global public good problem and impatience, people with a low rate of pure time preference will generally believe society is a long way from the optimal allocation to safety, and therefore that increasing investment in safety is currently much higher impact than increasing growth?
2) What would the impact of uncertainty about the parameters be? Should we act as if we’re generally in the eta > beta (but not much greater) regime, since that’s where altruists could have the most impact?
3) You look at the chance of humanity surviving indefinitely—but don’t we care more about something like the expected number of lives?
Might we be in the eta >> beta regime, but humanity still have a long future in expectation (e.g. tens of millions of years rather than billions)? It might then still be very valuable to further extend the lifetime of civilisation, even if extinction is ultimately inevitable.
Or are there regimes where focusing on helping people in the short-term is the best thing to do?
Would looking at expected lifetime rather than the probability of making it have other impacts on the conclusions? E.g. I could imagine it might be worth accepting a small increase in risk in exchange for acceleration, so long as it allows more people to live in the interim in expectation.
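A toy way to formalise this (my own illustrative setup, not the paper's): with population $N_t$ and hazard rate $\delta_t$, the survival probability and the expected number of future person-years are

$$S_t = \exp\!\left(-\int_0^t \delta_s \, ds\right), \qquad \mathbb{E}[\text{person-years}] = \int_0^{\infty} N_t S_t \, dt.$$

Even if $S_t \to 0$, so extinction is ultimately certain, the integral can still be large: with constant $N$ and $\delta$ it equals $N/\delta$, so halving the hazard doubles the expected number of lives without changing the long-run survival probability at all.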
Hi Ben, thanks for your kind words, and so sorry for the delayed response. Thanks for your questions!
Yes, this could definitely be the case. In terms of what the most effective intervention is, I don’t know. I agree that more work on this would be beneficial. One important consideration would be which intervention has the potential to raise the level of safety in the long run. Safety spending might only lead to a transitory increase in safety, or it could enable R&D that improves the level of safety in the long run. In the model, even slightly faster growth for a year means people are richer going forward forever, which in turn means people are willing to spend more on safety forever.
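To spell out that last mechanism (a stylized gloss, not a formula from the paper): a single year of extra growth $\Delta g$ is a permanent level effect,

$$c_t \;\to\; e^{\Delta g}\, c_t \quad \text{for all } t \ge t_0,$$

so if the share of resources devoted to safety is weakly increasing in income, safety spending is higher at every future date, not just during the fast-growth year.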
At least in terms of thinking about the impact of faster/slower growth, it seemed like the eta > beta case was the one we should focus on, as you say (and this is what I do in the paper). When eta < beta, growth is unambiguously good; when eta >> beta, existential catastrophe is inevitable.
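For intuition on why those two boundary regimes arise (a back-of-the-envelope sketch that abstracts from the model's endogenous allocation between consumption and safety, writing eta for the consumption elasticity as in the comments above): if the consumption and safety inputs both grow exponentially at a common rate $g$, the hazard rate scales as

$$\delta_t \;\propto\; e^{(\eta - \beta) g t}, \qquad \Pr(\text{survive forever}) = \exp\!\left(-\int_0^{\infty} \delta_t \, dt\right),$$

so for eta < beta the hazard decays exponentially, the integral converges, and faster growth only speeds the decay, while for eta sufficiently far above beta the hazard grows without bound and the survival probability is driven to zero. The interesting intermediate cases turn on how quickly society reallocates resources toward safety, which the common-growth-rate assumption here deliberately ignores.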
In terms of expected number of lives, it seems like the worlds in which humanity survives for a very long time are dramatically more valuable than any world in which existential catastrophe is inevitable. Nevertheless, I want to think more about potential cases where existential catastrophe might be inevitable, but there could still be a decently long future ahead. In particular, if we think humanity’s “growth mode” might change at some stage in the future, the relevant consideration might be the probability of reaching that stage, which could change the conclusions.
Thank you!