Looking at this paper now, I’m not convinced that Erdil and Besiroglu offer a good counterargument. Let me try to explain why and see if you disagree.
Their claim is about economic growth. It seems that they are exploring considerations for and against the claim that future AI systems will accelerate economic growth by an order of magnitude or more. But even if this were true, it doesn’t seem like it would result in a significant chance of extinction.
The main reason for believing the claim about economic growth doesn’t apply to stronger versions of the singularity hypothesis. As far as I can tell, the main reason to believe that this economic growth will happen is that AI might be able to automate most or all of the work done by human workers today. However, it seems that further argument is needed to claim that AI will also be smart enough to overpower us.
They give additional considerations against the singularity hypothesis. While the strongest arguments in favour of rapid economic growth don’t apply to the stronger singularity hypothesis, I think they do present arguments against it that apply. The most interesting ones for me were that regulation could slow down AI development, that we may not deploy powerful AI systems due to concerns about alignment, and that previous seemingly revolutionary technologies like computers, electricity, cars and aeroplanes arguably didn’t lead to large accelerations in economic growth.
I suspect a lot of the disagreement here is about whether the singularity hypothesis is along the lines of:
1. AI becomes capable enough to do lots or most economically useful tasks.
2. AI becomes capable enough to directly manipulate and overpower all humans, regardless of our efforts to resist and steer the future in a direction that is good for us.
I think of the singularity hypothesis as being along the lines of “growth will accelerate a lot”. I might operationalize this as predicting that the economy will grow by more than a factor of 10 within a decade. (This threshold is deliberately chosen to be pretty tame by singularity predictions, but pretty wild by regular predictions.)
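To make the threshold concrete, here is a quick back-of-the-envelope sketch of the annual growth rate it implies, assuming growth compounds at a constant rate; the ~3% baseline used for comparison is my own rough assumption, not a figure from the paper or this discussion:

```python
# Back-of-the-envelope: what does "the economy grows 10x within a decade" imply
# for the annual growth rate, assuming growth compounds at a constant rate?

factor = 10        # total growth factor over the period
years = 10         # length of the period in years

# Solve (1 + r)**years == factor for the constant annual rate r.
implied_annual_rate = factor ** (1 / years) - 1

# Assumed baseline of roughly 3% annual world growth, for comparison only.
baseline_rate = 0.03
baseline_decade_factor = (1 + baseline_rate) ** years

print(f"Implied annual growth rate: {implied_annual_rate:.1%}")       # ~25.9%
print(f"Decade factor at ~3% growth: {baseline_decade_factor:.2f}x")  # ~1.34x
```

So the “tame” version of the singularity prediction still implies annual growth roughly an order of magnitude above the historical baseline, which is why I call it wild by regular predictions.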
I think this is pretty clearly stronger than your 1 but weaker than your 2. (It might be close to predicting that AI systems become much smarter than humans who lack access to computers or AI tools, but this is compatible with humans remaining easily and robustly in control.)
I think this growth-centred hypothesis is important and deserves a name, and “singularity” is a particularly good name for it. Your 1 and 2 also seem like they could use names, but I think they’re easier to describe with alternate names, like “mass automation of labour” or “existential risk from misaligned AI”.