Thanks for sharing the papers. Some of those look really interesting. I'll try to remember to come back to them when I have time to absorb them.
What do you think of the Arch Mission Foundation's Nanofiche archive on the Moon?
Wouldn't a global totalitarian government, or a global government of any kind, require advanced technology and a highly developed, highly organized society? If so, that implies a high level of recovery from a collapse; but then why would global totalitarianism be more likely in such a recovery scenario than it is right now?
I have personally never bought the idea of "value lock-in" for AGI. It seems like an idea inherited from the MIRI worldview, which is a very specific view of AGI with some very specific and contestable assumptions about what AGI will be like and how it will be built. For instance, the concept of "value lock-in" wouldn't apply to AGI created through human brain emulation. And for the other technological paradigms that could underlie AGI, are they like human brain emulation in this respect or unlike it? But this is starting to get off-topic for this post.
Wouldn't a global totalitarian government, or a global government of any kind, require advanced technology and a highly developed, highly organized society? If so, that implies a high level of recovery from a collapse; but then why would global totalitarianism be more likely in such a recovery scenario than it is right now?
Though it may be more likely for the world to slide into global totalitarianism after recovery from a collapse, I was referring to a scenario in which there was no collapse, but the catastrophe pushed us toward totalitarianism. Some people think the world could have ended up totalitarian if World War II had gone differently.
I don't think it's the most cost-effective way of mitigating X risk, but I guess you could think of it as Plan F:
Plan A: prevent catastrophes
Plan B: contain catastrophes (e.g. preventing a nuclear war from escalating, or suppressing an extreme pandemic)
Plan C: resilience despite the catastrophe getting very bad (e.g. maintaining civilization despite the sun being blocked, or despite infrastructure collapsing because workers stay home out of pandemic fear)
Plan D: recover from collapse of civilization
Plan E: refuges in case everyone else has died
Plan F: resurrect civilization
I have personally never bought the idea of "value lock-in" for AGI. It seems like an idea inherited from the MIRI worldview, which is a very specific view of AGI with some very specific and contestable assumptions about what AGI will be like and how it will be built.
I think value lock-in does not depend on the MIRI worldview; here's a relevant article.
Thank you for sharing your perspective. I appreciate it.
I definitely misunderstood what you were saying about global totalitarianism. Thank you for clarifying. I will say I have a hard time guessing how global totalitarianism might result from a near-miss or a sub-collapse disaster involving one of the typical global catastrophe scenarios, like nuclear war, pandemics (natural or bioengineered), asteroids, or extreme climate change. (Maybe authoritarianism or totalitarianism within some specific countries, sure, but a totalitarian world government?)
To be clear, are you saying that your own paper about storing data on the Moon is also a Plan F? I was curious what you thought of the Arch Mission Foundation because your paper proposes putting data on the Moon, and someone has actually done that! They didn't execute your specific idea, of course, but I wondered how you thought their idea stacked up against yours.
I definitely agree that putting data on the Moon should be at best a Plan F, our sixth priority, if not even lower! I think the chances of data on the Moon ever being useful are slim, and I don't want the world to ever get into a scenario where it would be useful!
I think value lock-in does not depend on the MIRI worldview; here's a relevant article.
Ah, I agree, this is correct, but I meant that the idea of value lock-in was inherited from a very specific way of thinking about AGI, primarily popularized by MIRI and its employees but also by people like Nick Bostrom (e.g. in his 2014 book Superintelligence). Thinking that value lock-in is a serious and likely concern with regard to AGI does not require you to subscribe to MIRI's or Bostrom's specific worldview on AGI. So, you're right in that respect.
But I think if recent history had played a little differently and ideas about AGI had been formed imagining that human brain emulation would be the underlying technological paradigm, or that it would be deep learning and deep reinforcement learning, then the idea of value lock-in would not be as popular in current discussions of AGI as it is. I think the popularity of the value lock-in idea is largely an artifact of the historical coincidence that many philosophical ideas about AGI got formed while symbolic AI or GOFAI was the paradigm people were imagining would produce AGI.
The same could be said for broader ideas about AI alignment.