I suspect a lot of the disagreement here is about whether the singularity hypothesis is along the lines of:
1. AI becomes capable enough to do many or most economically useful tasks.
2. AI becomes capable enough to directly manipulate and overpower all humans, regardless of our efforts to resist and steer the future in a direction that is good for us.
I think of the singularity hypothesis as being along the lines of “growth will accelerate a lot”. I might operationalize this as predicting that the economy will grow by more than a factor of 10 within a decade. (This threshold is deliberately chosen to be pretty tame by the standards of singularity predictions, but pretty wild by the standards of regular predictions.)
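To give a sense of what that threshold implies, here is a quick back-of-the-envelope sketch (the ~3% comparison figure is my rough assumption for recent world growth, not something stated above):

```python
# Implied annual growth rate if the economy grows 10x over a decade:
# solve (1 + r)**10 = 10 for r.
r = 10 ** (1 / 10) - 1
print(f"{r:.1%}")  # prints "25.9%"
```

So the threshold corresponds to sustained growth of roughly 26% per year, compared with the ~3% per year that has been typical of the recent global economy.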
I think this is pretty clearly stronger than your 1 but weaker than your 2. (It might be close to predicting that AI systems become much smarter than humans who lack access to computers or AI tools, but that is compatible with humans remaining easily and robustly in control.)
I think this growth-centred hypothesis is important and deserves a name, and “singularity” is a particularly good name for it. Your 1 and 2 also seem like they could use names, but I think they’re easier to describe with alternate names, like “mass automation of labour” or “existential risk from misaligned AI”.