I will admit that my comments on indefinite delay were intended to be the core of my question, with “forever” being a way to get people to think “if we never figure it out, is it so bad?”
As for the suffering costs of indefinite delay, I think most of those are pretty well-known (more deaths due to diseases, more animal suffering/death due to lack of cellular agriculture [but we don’t need AGI for this], higher x-risk from pandemics and climate effects), with the odd black swan possibility still out there. I think it’s important to consider the counterfactual conditions as well—that is, “other than extinction, what are the suffering costs of NOT indefinite delay?”
More esoteric risks aside (basilisks, virtual hells, etc.), disinformation, loss of social connection, loss of trust in human institutions, economic crisis and mass unemployment, and a permanent curtailing of human potential by AI (making us permanently a “pet” species, totally dependent on the AGI) seem like the most pressing short-term (0.01–100 years) s-risks of not delaying indefinitely. The amount of energy AI consumes can also exacerbate fossil fuel exhaustion and climate change, which carry strong s-risks (and distant x-risks) as well; this is at least a strong argument for delaying AI until we figure out fusion, high-yield solar, etc.
As for that third question, I left it out because I felt it would make the discussion too broad (the theory plus this practicality seemed like too much). “Can we actually enforce indefinite delay?” and “What if indefinite delay doesn’t reduce our x-risk?” are questions that keep me up at night, and I’ll admit I don’t know much about the details of arguments centered on compute overhang (I need to do more reading on that specifically). I am convinced that the current path will likely lead to extinction, based on existing work on sudden capabilities increases in AGI combined with its fundamental disconnection from objective or human values.
I’ll end with this—if indefinite delay turns out to increase our x-risk (or if we just can’t do it for sociopolitical reasons), then I truly envy those who were born before 1920—they never had to see the storm that’s coming.