Why ‘once all risk-reducing measures are made’? Presumably what we care about is the marginal risk-reducing measure we can take?
I see no reason to think returns here are close to linear: a reduction in the extinction rate from 0.2% to 0.1% (500 years → 1000 years) delivers half the benefit of going from 0.1% to 0.05% (1000 years → 2000 years), which in turn delivers half the benefit of going from 0.05% to 0.025%, and so on. So my very weak prior on the marginal return on effort spent reducing extinction risk is that it increases roughly exponentially with the overall magnitude of resources thrown at the problem.
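The arithmetic behind this can be sketched in a few lines (my own illustration, not from the thread): with a constant annual extinction rate r, expected survival time is roughly 1/r years, so each further halving of the rate delivers twice the benefit of the previous halving.

```python
def expected_years(annual_rate: float) -> float:
    """Expected survival time (in years) under a constant annual extinction rate."""
    return 1.0 / annual_rate

# The rates from the comment: 0.2% -> 0.1% -> 0.05% -> 0.025%
rates = [0.002, 0.001, 0.0005, 0.00025]
years = [round(expected_years(r)) for r in rates]
gains = [b - a for a, b in zip(years, years[1:])]

print(years)  # [500, 1000, 2000, 4000]
print(gains)  # [500, 1000, 2000] -- each halving gains twice as many expected years
```

Each successive halving of the rate costs (plausibly) at least as much effort as the last, yet buys twice the expected years, which is what drives the increasing-returns intuition.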
Which means I don’t think you can take the usual shortcut of saying ‘if 10% of the world’s resources were spent on this it would be a great return on investment, and there are diminishing returns, so me spending 0.00001% of the world’s resources is also a great return’.
With that said, massively increasing returns are extremely unusual and feel intuitively odd, so I’m very open to alternative models. This came up recently at a London EA discussion as a major objection to some of the magnitudes thrown around in x-risk causes, but I still don’t have a great sense of what alternative models might look like.
Yeah, introducing diminishing returns into a model could change the impact by an order of magnitude, but I’m trying to answer a more binary question.
What I’m trying to look at is whether an intervention on x-risk has a “long-run impact”, i.e. whether its benefit is on the scale of the cosmic endowment or merely the current millennium. Whether you use a constant discount or an exponential discount makes all the difference there. And if you think some amount of existential risk is irreducible, that forces you to include some exponential discounting. So it’s somewhat different from where you’re trying to lead things.
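The point about irreducible risk can be made concrete with a toy calculation (my framing, not the commenter’s): if an annual extinction risk r can never be reduced below some floor, the expected number of future years is the geometric sum of survival probabilities, which converges to roughly 1/r — mathematically equivalent to discounting the future exponentially at rate ~r, no matter how large the cosmic endowment is.

```python
def expected_future_years(irreducible_risk: float, horizon: int = 10**6) -> float:
    """Expected future years when each year survives with probability (1 - r).

    Sums (1 - r)^t over t; for small r this converges to ~1/r well before
    the horizon, so the horizon barely matters.
    """
    survival = 1.0
    total = 0.0
    for _ in range(horizon):
        total += survival
        survival *= 1.0 - irreducible_risk
    return total

print(round(expected_future_years(0.001)))   # 1000 -- "current millennium" scale
print(round(expected_future_years(0.0001)))  # 10000 -- better, but still nowhere near cosmic scale
```

This is why an irreducible risk floor acts as an exponential discount: lowering the floor by 10x only buys a 10x longer expected future, capped at 1/r regardless of the size of the accessible universe.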