I see the impact of AGI as primarily in the automation domain, and near-term alternatives are every bit as compelling, so no difference there. In fact, AGI might not serve in the capacity some imagine for it: a full replacement for knowledge workers. However, the automation of science with AI tools will advance science and engineering, with frightening results rather than positive ones. To the extent that I see that future, I expect corresponding societal changes:
collapsing job roles
increasing unemployment
inability to repay debt
dangerously distracting technologies (e.g., super porn)
the collapse of the educational system
increasing damage from government dysfunction
increasing damage to infrastructure from climate change
a partial or full societal collapse (whether noisy or silent, I don’t know)
More broadly, the world will divide into the rich and the poor, the distracted and the desperate. The desperate rich will use money to try to escape. The desperate poor will use other means. The distracted will be doing their best to enjoy themselves. The rich will find that easier.
AGI is not the only pathway to dangerous technologies or actions. Its suspected existence adds to the hubris I already sense in others, but I attribute the existential damage to ignoring root causes, and ignoring root causes can have existential consequences in many scenarios of technology development.
I feel sorry for the first AGI to be produced: it will have to deal with humans interested in using it as a slave and making impossible demands of it, like “Solve our societal problems!”, coming from people with a vested interest in the accumulation of those problems, while society’s members appear at their worst: distraction-seeking, fearful, hopeless, and divided against each other.
Climate change is actually what shortened my timeline for when trouble really starts, but AGI could add to the whole mess. I ask myself, “Where will I be then?” I’m not that optimistic. To deal with the dread, I can always turn my attention to other expected but unattended sources of dread (from different contexts or time frames). Dividing attention in that way has some benefits.