Thank you for sharing your perspective. I appreciate it.
I definitely misunderstood what you were saying about global totalitarianism. Thank you for clarifying. I will say I have a hard time imagining how global totalitarianism might result from a near-miss or a sub-collapse disaster involving one of the typical global catastrophe scenarios, like nuclear war, pandemics (natural or bioengineered), asteroid impacts, or extreme climate change. (Maybe authoritarianism or totalitarianism within some specific countries, sure, but a totalitarian world government?)
To be clear, are you saying that your own paper about storing data on the Moon is also a Plan F? I was curious what you thought of the Arch Mission Foundation because your paper proposes putting data on the Moon, and they have actually done that! They didn’t execute your specific idea, of course, but I wondered how you thought their idea stacked up against yours.
I definitely agree that putting data on the Moon should be at best a Plan F, our sixth priority, if not even lower! I think the chances of data on the Moon ever being useful are slim, and I don’t want the world to ever get into a scenario where it would be useful!
You wrote: “I think value lock-in does not depend on the MIRI worldview; here’s a relevant article.”
Ah, I agree, that’s correct, but I meant that the idea of value lock-in is inherited from a very specific way of thinking about AGI, primarily popularized by MIRI and its employees but also by people like Nick Bostrom (e.g., in his 2014 book Superintelligence). Thinking that value lock-in is a serious and likely concern with regard to AGI does not require you to subscribe to MIRI’s or Bostrom’s specific worldview on AGI. So, you’re right in that respect.
But I think that if recent history had played out a little differently, and ideas about AGI had been formed imagining that human brain emulation would be the underlying technological paradigm, or that deep learning and deep reinforcement learning would be, then the idea of value lock-in would not be as popular in current discussions of AGI as it is. I think the popularity of the value lock-in idea is largely an artifact of the historical coincidence that many philosophical ideas about AGI were formed while symbolic AI, or GOFAI, was the paradigm people imagined would produce AGI.
The same could be said for broader ideas about AI alignment.