I’d again prefer to frame the issue as “what are the downsides from spending marginal resources on efforts to slow down?” I think the main downside, from this marginal perspective, is opportunity costs in terms of other efforts to reduce future risks, e.g. trying to implement “fail-safe measures”/“separation from hyperexistential risk” in case a slowdown is insufficiently likely to be successful. There are various ideas that one could try to implement.
In other words, a serious downside of betting chiefly on efforts to slow down over these alternative options could be that these s-risks/hyperexistential risks would end up being significantly greater in counterfactual terms (again, not saying this is clearly the case, but, FWIW, I doubt that efforts to slow down are among the most effective ways to reduce risks like these).
a fast software-driven takeoff is the most likely scenario
I don’t think you need to believe this to want to be slamming on the brakes now.
Didn’t mean to say that that’s a necessary condition for wanting to slow down. But again, I still think it’s highly unclear whether efforts that push for slower progress are more beneficial than alternative efforts.
I think it’s a very hard sell to try and get people to sacrifice themselves (and the whole world) for the sake of preventing “fates worse than death”. At that point most people would probably just be pretty nihilistic. It also feels like it’s not far off basically just giving up hope: the future is, at best, non-existence for sentient life; but we should still focus our efforts on avoiding hell. Nope. We should be doing all we can now to avoid having to face such a predicament! Global moratorium on AGI, now.
I think it’s a very hard sell to try and get people to sacrifice themselves (and the whole world) for the sake of preventing “fates worse than death”.
I’m not talking about people sacrificing themselves or the whole world. Even if we were to adopt a purely survivalist perspective, I think it’s still far from obvious that trying to slow things down is more effective than focusing on other aims. After all, the space of alternative aims that one could focus on is vast, and trying to slow things down comes with non-trivial risks of its own (e.g. risks of backlash from tech-accelerationists). Again, I’m not saying it’s clear; I’m saying that it seems to me unclear either way.
We should be doing all we can now to avoid having to face such a predicament!
But, as I see it, what’s at issue is precisely what the best way is to avoid such a predicament, i.e. how best to navigate our current all-too-risky situation.
FWIW, I think that a lot of the discussion around this issue appears strongly fear-driven, to such an extent that it seems to get in the way of sober and helpful analysis. This is, to be sure, extremely understandable. But I also suspect that it is not the optimal way to figure out how to best achieve our aims, nor an effective way to persuade readers on this forum. Likewise, I suspect that rallying calls along the lines of “Global moratorium on AGI, now” might generally be received less well than would, say, a deeper analysis of the reasons for and against attempts to institute that policy.
I feel like I’m one of the main characters in the film Don’t Look Up here.
the space of alternative aims that one could focus on is vast
Please can you name 10? The way I see it is: either alignment is solved in time with business as usual[1], or we Pause to allow time for alignment to be solved (or establish its impossibility). It is not a complicated situation. No need to be worrying about “fates worse than death” at this juncture.
[1] Seems highly unlikely, but please say if you think there are promising solutions here.