I think this is a useful analysis, and it reveals to me how opposed I am to the ‘indefinite moratorium’ proposal.
For the moratorium to last indefinitely, there seems to be a dilemma between:
1. General scientific/technological progress stops permanently along with AGI progress stopping permanently, or
2. We create a Bostrom-style arrangement in which all people and all computers are constantly surveilled to prevent any AI-relevant advances from occurring.
I think both of these would constitute an existential catastrophe (in the traditional sense of failing to realise most of humanity’s potential). 1 seems clearly bad (outside of, e.g., a degrowth perspective). 2 would be necessary because, if 1 does not hold and science and technology keep progressing while AGI does not, it will become increasingly easy to build AGI (or at least to make progress towards it), and we will need ever more restrictive mechanisms to prevent people from doing so. We probably cannot access many regions of science-and-technology space without superintelligence, so eventually 2 may morph into 1 as well.
So, for me, a moratorium lasting years or decades could be very good and valuable, but only if we eventually lift it.
I would be interested in hearing from anyone who takes the position that AGI/superintelligence should never be built, and potentially in writing an EAF dialogue together or otherwise trying to understand each other better.
I am unsure how long it is possible for an indefinite moratorium to last, but I probably fall, and increasingly fall, much closer to supporting it than I guess you do.
In answer to these specific points: I basically see maintaining a moratorium as an example of Differential Technology Development. As long as the technologies we can use to maintain a moratorium (both physical and social technologies) outpace progress towards ASI, we can maintain the moratorium. I do think this would require drastically slowing down a specific subset of scientific progress in the long term, but I am not convinced it would be as general as you suggest. I guess this is some mixture of 1 and 2, although in both cases I think neither position ends up being so extreme.
In answer to your normative judgement: if 1 allows a flourishing future, which I think a drastically slowed pace of progress could, then it seems desirable from a longtermist perspective. I’m also really unsure that, with sufficient time, we can’t access significant parts of technology space without an agentic ASI, particularly if we sufficiently increase our defences against an agentic ASI using technologies like narrow AI. It also strikes me that assigning significant normative value to accessing all areas (or even extremely large areas) of science and technology space reflects a value set that treats ‘progress’/transhumanism as an end in itself, rather than as a means to an end (as totalist utilitarians with transhumanist bents do).
For me, it’s really hard to tell how long we could hold a moratorium for, or for how long doing so would be desirable. But certainly, if feasible, timescales well beyond decades seem very desirable.