I think avoiding existential risk is the most important thing. As long as we can do that and don’t have some kind of lock-in, then we’ll have time to think about and optimize the value of the future.
Right. How can we prevent a misaligned AI from locking in bad values?
A misaligned AI surviving takeover still counts as “no extinction”; see MacAskill’s comment: https://forum.effectivealtruism.org/posts/TeBBvwQH7KFwLT7w5/william_macaskill-s-shortform?commentId=jbyvG8sHfeZzMqusJ