Why aren’t more EA-aligned organizations and initiatives (other than MIRI) presenting global, strictly enforced bans on advanced AI training as a solution to x-risk from AI? And why isn’t there more discussion of acceptance (of the traditional human condition) as an antidote to the risks of AGI, rather than relying solely on alignment research and safety practices to provide a safe path forward for AI (I’m not convinced such a path exists)?
I think these are far more relevant questions than the theoretical long-termist question you ask.
People can be in favor of indefinite AI pauses without wanting permanent stagnation. They may be willing to accept reduced progress over the next 100+ years in exchange for reduced risk. The relevant considerations seem to be:
1. How much extra suffering does indefinite delay carry?
2. How much x-risk from non-AI causes does indefinite delay carry?
3. Does pursuing indefinite delay reduce x-risk, or does it increase it? Is it feasible?
I expect you’d find that the main disagreement within EA is on that third question, for reasons such as compute overhang, differentially delaying the most responsible actors, and “China”.
I will admit that my comments on indefinite delay were intended to be the core of my question, with “forever” being a way to get people to think “if we never figure it out, is it so bad?”
As for the suffering costs of indefinite delay, I think most of those are fairly well known: more deaths from disease, more animal suffering and death due to the lack of cellular agriculture (though we don’t need AGI for that), and higher x-risk from pandemics and climate effects, with the odd black swan possibility still out there. I think it’s also important to consider the counterfactual, that is, “other than extinction, what are the suffering costs of not delaying indefinitely?”
More esoteric risks aside (Basilisks, virtual hells, etc.), the most pressing short-term (roughly 0.01–100 years) s-risks of not delaying indefinitely seem to be disinformation, loss of social connection, loss of trust in human institutions, economic crisis and mass unemployment, and a permanent curtailing of human potential by AI (making us a permanent “pet” species, totally dependent on the AGI). AI’s energy consumption can also exacerbate fossil fuel exhaustion and climate change, which carry strong s-risks (and more distant x-risk) as well; that alone is a strong argument for delaying AI until we figure out fusion, high-yield solar, etc.
As for the third question, I left it out because I felt it would make the discussion too broad (the theory plus that practicality seemed like too much). “Can we actually enforce indefinite delay?” and “What if indefinite delay doesn’t reduce our x-risk?” are the questions that keep me up at night, and I’ll admit I don’t know the details of the compute-overhang arguments well (I need to do more reading on that specifically). I am convinced that the current path will likely lead to extinction, based on existing work on sudden capability gains in AGI combined with its fundamental lack of connection to objective or human values.
I’ll end with this—if indefinite delay turns out to increase our x-risk (or if we just can’t do it for sociopolitical reasons), then I truly envy those who were born before 1920—they never had to see the storm that’s coming.