"2. Why should progress continue indefinitely? Maybe there will be progress until 2100, and the level of sophistication in that year will determine the entire future?
This scenario just seems to strain plausibility. Again, almost all ways that progress could plausibly stop don't depend on the calendar year, but are driven by human activities (and presumably some intermediating technological progress)."
I think this is where I disagree with you most: I don't think this strains plausibility at all, and in fact I think the statement as given is basically true, give or take the exact year. Examples of events whose timing seems determined by "calendar year" rather than by the level of progress, but where the likelihood of survival depends on the level of progress:
1. First, second, or later contact with intelligent alien life (especially in a "they find us" scenario, rather than vice versa).
2. Asteroid impacts, super-volcanoes, and other "natural" disasters.
I'm sure people more imaginative than me can think of others.
You mentioned the latter category explicitly and noted that such events are very rare. I agree. But "very rare" still means "mathematically guaranteed eventually", so sooner or later there will be a year X where the level of sophistication does in fact determine the entire future. [1] concerns me more than [2], though, since it seems more of a "make-or-break" event, and more certain to happen eventually. (Aside: I'm always a bit surprised that contact with alien life doesn't seem to turn up much in discussions of the very far future. It's normally my primary consideration when thinking that far ahead, since I view it as incredibly important to what the long run looks like, and incredibly likely given sufficient time.)
This is obviously related to a broader disagreement, which is whether the threats and constraints on humanity are primarily external (disease, cosmic issues, aliens, maybe even resource issues) or internal (AI, nukes, engineered microbes). I lean strongly towards external.
Against an unknown external threat, speeding up broad-based progress is a sensible response. It has so far been the case that the primary things hurting humanity have been external, so up to now broad-based progress has been very powerful. You don't appear to disagree with this. Your actual disagreement is, I think, contained in the sentence "For better or worse, almost all problems with the potential to permanently change human affairs are of our own making."
Arguing against that would take me somewhat out of the scope of this thread and make this comment even longer than it already is. But it seems clear to me that this is not an assertion many of those you are arguing against would agree with, and without it, I don't think the rest of your argument holds.
My estimates for [1] and [2] together are less than 0.01% per year (for [1], the estimate is very much lower!). So the quantitative effect of speeding up progress, via these channels, is quite small. You would have to be very pessimistic about the plausible trajectory-changing impacts of available interventions before such a small effect was relevant.
I can believe that most of the problems faced by people today are external, and indeed this is related to why I find this disagreement such a painful one. But why do you think that the long-term constraints are external? I've never seen a really plausible quantitative argument for this. Resource limitations are the closest, though (1) I'm quite skeptical of them as a very long-term factor, which comes down mostly to views on the likely rate of technological progress over the next centuries, and (2) describing them as "external" rather than "a consequence of progress" is already pushing it.
I think that the "man-made" troubles have already easily surpassed the natural worries, via the risk of nuclear annihilation and anthropogenic climate change. Do you disagree with this? What natural problems do you think might be competitive? The unknown unknown?
TL;DR: We can robustly increase the speed of progress. We can also establish that, contrary to the argument above, increasing the speed of progress has non-trivial very-long-run value. I don't see how we can robustly change the direction of progress. I also don't see how we could robustly know that the direction we move progress in is valuable, even if we could robustly change the direction of progress.
At the risk of stating something we both know, 0.01% per year is not at all "small" in this context. My estimate would actually be lower, and I still think my argument holds. That's because I think progress will continue for many thousands of years. I agree with you that eventually it has to stop or dwindle into insignificance. But all I need to establish is that there is a non-trivial chance that it does not stop before one of the make-or-break events. If progress is currently set to continue for at least 1000 more years and there is indeed a 0.01% chance per year, there is roughly a 10% chance of that scenario (the one that you said "strains plausibility"). 1000 years just isn't a very long time. And once I have that, speeding up progress becomes very valuable again.
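To spell out the arithmetic (a sketch, assuming a constant, independent 0.01% annual risk over 1000 years):

$$1 - (1 - 0.0001)^{1000} \approx 0.095,$$

i.e. roughly a 10% cumulative chance that a make-or-break event arrives while progress is still underway.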
"You would have to be very pessimistic about the plausible trajectory-changing impacts of available interventions before such a small effect was relevant."
I am actually very pessimistic about these, because it's not clear to me that we have any examples of trying to do more of this on the margin working. We do have examples of attempts to speed up progress on the margin working.
Assuming that by "anthropogenic climate change" you essentially mean rising CO2 levels, I don't actually rate climate change as an x-risk, so it's not obvious to me that it belongs in this discussion. I rate it as something which could cause a great deal of suffering and, as a side-effect of that, slow down progress, but not as something that brings about the "end of the world as we know it". If it is an x-risk, then in a sense my response is "well, that's too bad", since I also view it as inevitable; all the evidence I'm aware of suggests that we collectively missed the boat on this one some time ago. In fact, because we've already missed the boat, the best thing to do about it right now is very likely to speed up progress, so as to be better placed to deal with the problems when they come.
In the worst case, you personally saving money and spending it later (in the worst case, to hasten progress at a time when the annual risk of doom has increased!) seems very likely to beat 0.01%, unless you have pretty confident views about the future.
I think research on improving institutional quality, human cognition, and human decision-making also quite easily crosses this bar, has had successes in the past, and offers opportunities for more work on the margin. I've written about why I think these things would constitute positive changes. But it's also worth pointing out that if you think there is a 0.01%/year chance of doom now, then improvements in decision-making can probably be justified just by their short-term impact on the probability of handling a catastrophe soon.
How long progress goes on before stopping seems irrelevant to its value, according to the model you described. The value of an extra year of progress is always the per annum doom risk. Suppose that after some number of years T the per annum doom probability drops to 0. Then speeding up progress by 1 year reduces that number to T-1, reducing the cumulative probability of doom by the per annum doom probability. And this conclusion is unchanged if the doom probability decreases continuously rather than dropping to 0 all at once, or if there are many different kinds of doom, or whatever. It seems to be an extremely robust conclusion. Another way of seeing this is that speeding up progress by 1 year is equivalent to just pausing the natural doom-generating processes for a year, so naturally the goodness of a year of progress is equal to the badness of the doom-generating processes.
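To make this explicit (a sketch, writing $p$ for the constant per annum doom probability up to year $T$ and zero afterwards): the cumulative probability of doom is

$$D(T) = 1 - (1-p)^{T},$$

so speeding progress up by one year changes it by

$$D(T) - D(T-1) = p\,(1-p)^{T-1} \approx p \quad \text{for small } p,$$

which is (nearly) independent of $T$, as claimed.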
If you believe a 0.01%/year chance of doom by natural catastrophes, addressing response capabilities to those catastrophes in particular generally seems like it is going to easily dominate faster progress. On your model, reducing our probability of doom from natural disasters by 1% over the next year is going to be comparable to increasing the overall rate of progress by 1% over the next year, and given the relative spending on those problems it would be surprising if the latter was cost-effective. I can imagine such a surprising outcome for some issues like encountering aliens (where it's really not clear what we would do), but not for more plausible problems like asteroids or volcanoes (since those act by climate disruptions, and there are general interventions to improve society's robustness to massive climate disruptions).
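To see the comparability claim numerically (a sketch, again using the 0.01%/year figure and reading "reduce by 1%" as a relative reduction): a 1% faster year of progress buys 0.01 years of progress, worth about $0.01 \times 0.0001 = 10^{-6}$ in cumulative doom probability on the model above, while a 1% relative reduction in one year's natural doom risk is likewise $0.01 \times 0.0001 = 10^{-6}$.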
Whether we've missed the boat on climate change or not, it would undermine your claim that historically the main problems have been nature-made and not man-made (which I find implausible), which you were invoking to justify the prediction that the same will hold true in the future. I'm also willing to believe the risk of extremely severe outcomes is not very large, but it doesn't have to be very large to beat 0.01%/year.
One reason I'm happy ignoring aliens is that the timing would have to work out very precisely for aliens to first visit us during any particular 10k-year period, given that the overall timescales involved are at least hundreds of millions of years and probably billions. There are further reasons I discount this scenario, but the timing consideration alone seems sufficient (modulo simulation-like hypotheses where the aliens periodically check up on Earth to see if it has advanced life which they should exterminate, or some other weird thing).
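To put rough numbers on the timing point (a sketch, assuming first contact is roughly uniformly distributed over the plausible window): if contact could arrive any time in a $\sim 10^{8}$-year window, the chance it falls within a given $10^{4}$-year period is about

$$\frac{10^{4}}{10^{8}} = 10^{-4},$$

i.e. around $10^{-8}$ per year, several orders of magnitude below the 0.01%/year figure discussed above.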
"In the worst case, you personally saving money and spending it later (in the worst case, to hasten progress at a time when the annual risk of doom has increased!) seems very likely to beat 0.01%, unless you have pretty confident views about the future."
I don't disagree on this point, except that I think there are better ways to maximise "resources available to avert doom in year 20xx" than simply saving money and gaining interest.
"How long progress goes on before stopping seems irrelevant to its value, according to the model you described. The value of an extra year of progress is always the per annum doom risk."
I basically agree, and I should have made that explicit myself. I only invoked specific numbers to highlight that a 0.01% annual doom risk is actually pretty significant once we're working on the relevant timescales, and therefore why I think it is plausible/likely that there will indeed one day be a year where the level of sophistication determines the entire future.
"Whether we've missed the boat on climate change or not, it would undermine your claim that historically the main problems have been nature-made and not man-made (which I find implausible), which you were invoking to justify the prediction that the same will hold true in the future."
That wasn't the prediction I was trying to make at all, though on re-reading my post I can see why you might have thought I was. But the negation of "almost all problems...are of our own making" is not "most problems are not of our own making". There's a large gap in the middle there, and indeed it's not clear to me which will dominate in the future. I think external has dominated historically and marginally think it still dominates now, but what I'm much more convinced of is that we have good methods to attack it.
In other words, external doesn't need to be much bigger than internal in the future to be the better thing to work on; all it needs to be is (a) non-trivial and (b) more tractable.
The rest of your post suggests specific alternative interventions. I'm open to the possibility that there is some specific intervention that is both more targeted and currently being overlooked. I think that conclusion is best reached by considering interventions, or classes of interventions, one at a time. But as a prior, it's not obvious to me that this would be the case.
And on that note, do you think that the above was the case in 1900? 1800? Has it always been the case that focusing on mitigating a particular category of risk is better than focusing on general progress? Or have we passed some kind of tipping point which now makes it so?
At those dates I think focusing on improving institutional decision-making would almost certainly beat trying to mitigate specific risks, and might well also beat focusing on general progress.
What would be an example? It's quite possible I don't disagree on this, because it's very opaque to me what "improving institutional decision-making" would mean in practice.
Paul gives some examples of things he thinks are in this category today.
If we go back to these earlier dates, I guess I'd think of working to secure international cooperation, and perhaps to establish better practices in things like governments and legal systems.
I think it's hard to find easy concrete things in this category, as they tend to just get done, but with a bit of work progress is possible.
You talk about progress today being of macroscopic relevance if we get an exogenous make-or-break event in the period where progress is continuing. I think it should really be if we get such an event in the period where progress is continuing exponentially. If we've moved into a phase of polynomial growth (plausible, for instance, if our growth is coming from spreading out spatially) then it seems less valuable. I'm relying here on a view that our (subjective) chances of dealing with such events scale with the logarithm of our resources. I don't think that this changes your qualitative point.
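A sketch of why the growth regime matters, under the stated log-resources assumption: with exponential growth,

$$R(t) = R_0 e^{gt} \;\Rightarrow\; \log R(t) = \log R_0 + gt,$$

so a one-year speed-up adds a constant $g$ to log-resources at every future date. With polynomial growth,

$$R(t) = R_0 t^{k} \;\Rightarrow\; \log R(t) = \log R_0 + k \log t,$$

and a one-year shift adds only $k \log\frac{t+1}{t} \to 0$ as $t$ grows, so the benefit of the speed-up washes out.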
I do think that endogenous risk over the next couple of centuries is at least comparable in size to exogenous risk over the period before exponential growth ends. I think that increasing the resources devoted to dealing with endogenous risk by 1% will reduce those risks by about as much as a 1% increase in prosperity will reduce long-term exogenous risks. And I think it's probably easier to get that 1% increase in the first case than in the second, in large part because there are a lot fewer people already trying to do that.