I’m curious about potential methodological approaches to answering this question:
1. Arrive at a possible lower bound for the value of averting x-risk by thinking about how much one is willing to pay to save present people, as in Khorton's answer.
2. Arrive at a possible lower bound by thinking about how much one is willing to pay to save current people plus discounted future people.
3. Think about what EA is currently paying for similar risk reductions, and argue that one should be willing to pay at least as much for future risk-reduction opportunities.
I’m unsure about the third approach, but I think it captures most of what’s going on with Linch’s intuitions.
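As a rough illustration of how the first two lower-bound approaches might be operationalized, here is a sketch. Every figure in it (population, per-life willingness to pay, discount rate, generation length, horizon) is a placeholder assumption of mine, not a number from Khorton's answer or any other source.

```python
# Hypothetical lower bounds on the value of averting extinction.
# All numbers are illustrative placeholders, not claims from the post.

PRESENT_POPULATION = 8e9    # people alive today (assumption)
WTP_PER_LIFE = 1e5          # $ one is willing to pay per life saved (assumption)
ANNUAL_DISCOUNT = 0.02      # pure time discount rate per year (assumption)
GENERATION_SIZE = 8e9       # people per future generation (assumption)
GENERATION_YEARS = 30       # years per generation (assumption)
N_GENERATIONS = 100         # horizon for the second approach (assumption)

# Approach 1: lower bound from present people only.
lower_bound_present = PRESENT_POPULATION * WTP_PER_LIFE

# Approach 2: also count future generations, discounted back to today.
lower_bound_future = sum(
    GENERATION_SIZE * WTP_PER_LIFE
    / (1 + ANNUAL_DISCOUNT) ** (g * GENERATION_YEARS)
    for g in range(N_GENERATIONS)
)

print(f"Approach 1 lower bound: ${lower_bound_present:.2e}")
print(f"Approach 2 lower bound: ${lower_bound_future:.2e}")
```

Even with a nonzero discount rate, the second bound exceeds the first, since generation zero alone reproduces the present-people bound; the discount rate then controls how much the remaining generations add.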
Overall, I agree that this question is important, but current approaches don’t really convince me.
My intuition is that what would convince me is some really hardcore, robust modeling, e.g., from GPI, that takes into account both increased resources over time and increased risk. Right now, the closest published work might be Existential risk and growth and Existential Risk and Exogenous Growth, but these are inadequate for our purposes because they consider the problem at the global rather than at the movement level. The closest unpublished work consists of some models I’ve heard about that I hope will be published soon.
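To make the "increased resources vs. increased risk" tension concrete, here is a deliberately minimal toy model of one slice of it: whether to spend a marginal dollar on risk reduction now or invest it and spend later. This is my own assumption-heavy sketch of the kind of tradeoff such modeling would formalize, not a summary of the papers or models mentioned above; the growth and risk figures are hypothetical.

```python
# Toy model (all parameters hypothetical): spend $1 on x-risk reduction
# now, or invest it and spend in `years` years? The deferred dollar grows
# with investment returns but is worthless if catastrophe strikes first.

def deferral_multiplier(growth: float, annual_risk: float, years: int) -> float:
    """Expected spending power of a deferred dollar, relative to spending now.

    Each year the dollar compounds at `growth` but survives only with
    probability (1 - annual_risk). A result above 1 favors deferring.
    """
    return ((1 + growth) * (1 - annual_risk)) ** years

# With 5% growth and 0.2% annual risk, deferring 10 years looks favorable;
# with 2% growth and 2% annual risk, it does not.
print(deferral_multiplier(0.05, 0.002, 10))
print(deferral_multiplier(0.02, 0.02, 10))
```

A serious version would also need time-varying risk, diminishing returns to movement spending, and the effect of spending on the risk itself, which is exactly why I would want the hardcore modeling rather than this sketch.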