In the worst case, you personally saving money and spending it later (in the worst case, to hasten progress at a time when the annual risk of doom has increased!) seems very likely to beat 0.01%, unless you have pretty confident views about the future.
I think research on improving institutional quality, human cognition, and human decision-making also quite easily crosses this bar; it has had successes in the past and has opportunities for more work on the margin. I’ve written about why I think these things would constitute positive changes. But it’s also worth pointing out that if you think there is a 0.01% / year chance of doom now, then improvements in decision-making can probably be justified just by their short-term impact on the probability of handling a catastrophe soon.
How long progress goes on for before stopping seems irrelevant to its value, according to the model you described. The value of an extra year of progress is always the per annum doom risk. Suppose that after some number of years T the per annum doom probability drops to 0. Then speeding up progress by 1 year reduces that number to T-1, reducing the cumulative probability of doom by the per annum doom probability. And this conclusion is unchanged if the doom probability continuously decreases rather than dropping to 0 all at once, or if there are many different kinds of doom, or whatever. It seems to be an extremely robust conclusion. Another way of seeing this is that speeding up progress by 1 year is equivalent to just pausing the natural doom-generating processes for a year, so naturally the goodness of a year of progress is equal to the badness of the doom-generating processes.
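To write this down explicitly (a minimal sketch, assuming a constant per-annum doom probability $p$ that applies until progress ends the risk at year $T$):

\[
P(\text{doom}) \;=\; 1 - (1 - p)^{T} \;\approx\; T p \qquad \text{when } T p \ll 1,
\]

so advancing progress by one year replaces $T$ with $T-1$ and lowers the cumulative doom probability by roughly $p$, whatever the value of $T$.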
If you believe there is a 0.01% / year chance of doom from natural catastrophes, improving our ability to respond to those catastrophes in particular seems like it will generally dominate faster progress quite easily. On your model, reducing our probability of doom from natural disasters by 1% over the next year is going to be comparable to increasing the overall rate of progress by 1% over the next year, and given the relative spending on those problems it would be surprising if the latter were the more cost-effective option. I can imagine such a surprising outcome for some issues like encountering aliens (where it’s really not clear what we would do), but not for more plausible problems like asteroids or volcanoes (since those act through climate disruptions, and there are general interventions to improve society’s robustness to massive climate disruptions).
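For concreteness (my arithmetic, treating natural catastrophes as essentially the whole of the 0.01% / year risk, which is an extra assumption): with $p = 0.0001$,

\[
\underbrace{0.01 \times p}_{\text{1\% faster progress for a year}} \;=\; 10^{-6} \;=\; \underbrace{0.01 \times p}_{\text{1\% relative cut in this year's natural-catastrophe doom risk}},
\]

so the two interventions buy reductions in doom probability of the same order, and the comparison comes down to which is cheaper to achieve.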
Whether we’ve missed the boat on climate change or not, it would undermine your claim that historically the main problems have been nature-made and not man-made (which I find implausible), which you were invoking to justify the prediction that the same will hold true in the future. I’m also willing to believe the risk of extremely severe outcomes is not very large, but it doesn’t have to be very large to beat 0.01% / year.
One reason I’m happy ignoring aliens is that the timing would have to work out very precisely for aliens to first visit us during any particular 10k year period given the overall timescales involved of at least hundreds of millions of years and probably billions. There are further reasons I discount this scenario, but the timing thing alone seems sufficient (modulo simulation-like hypotheses where the aliens periodically check up on Earth to see if it has advanced life which they should exterminate, or some other weird thing).
“In the worst case, you personally saving money and spending it later (in the worst case, to hasten progress at a time when the annual risk of doom has increased!) seems very likely to beat 0.01%, unless you have pretty confident views about the future.”
I don’t disagree on this point, except that I think there are better ways to maximise ‘resources available to avert doom in year 20xx’ than simply saving money and gaining interest.
“How long progress goes on for before stopping seems irrelevant to its value, according to the model you described. The value of an extra year of progress is always the per annum doom risk.”
I basically agree, and I should have made that explicit myself. I only invoked specific numbers to highlight that a 0.01% annual doom risk is actually pretty significant once we’re working on the relevant timescales, and hence why I think it is plausible/likely that there will indeed be a year, one day, in which the level of sophistication determines the entire future.
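To put one concrete number on it (my arithmetic, using the 10,000-year horizon mentioned above and a constant 0.01% annual risk):

\[
1 - (1 - 0.0001)^{10\,000} \;\approx\; 1 - e^{-1} \;\approx\; 0.63,
\]

i.e. a risk that looks negligible year by year still amounts to roughly a 63% chance of doom at some point over that period.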
“Whether we’ve missed the boat on climate change or not, it would undermine your claim that historically the main problems have been nature-made and not man-made (which I find implausible), which you were invoking to justify the prediction that the same will hold true in the future.”
That wasn’t the prediction I was trying to make at all, though on re-reading my post I can see why you might have thought I was. But denying that ‘almost all problems...are of our own making’ is not the same as claiming that ‘most problems are not of our own making’. There’s a large gap in the middle there, and indeed it’s not clear to me which will dominate in the future. I think the external (nature-made) category has dominated historically and marginally think it still dominates now, but what I’m much more convinced of is that we have good methods to attack it.
In other words, external doesn’t need to be much bigger than internal in the future to be the better thing to work on, all it needs to be is (a) non-trivial and (b) more tractable.
The rest of your post suggests specific alternative interventions. I’m open to the possibility that there is some specific intervention that is both more targeted and currently being overlooked. I think that conclusion is best reached by considering interventions or classes of interventions one at a time. But as a prior, it’s not obvious to me that this would be the case.
And on that note, do you think that the above was the case in 1900? 1800? Has it always been the case that focusing on mitigating a particular category of risk is better than focusing on general progress? Or have we passed some kind of tipping point which now makes it so?
At those dates I think focusing on improving institutional decision-making would almost certainly beat trying to mitigate specific risks, and might well also beat focusing on general progress.
What would be an example? It’s quite possible I don’t disagree on this, because it’s very opaque to me what ‘improving institutional decision making’ would mean in practice.
Paul gives some examples of things he thinks are in this category today.
If we go back to these earlier dates, I guess I’d think of working to secure international cooperation, and perhaps to establish better practices in things like governments and legal systems.
I think it’s hard to find easy concrete things in this category, as they tend to just get done, but with a bit of work progress is possible.