The heroes who will save the world will be alignment researchers
I’m not so sure these days[1]. The heroes who save the world may well be those who get Microsoft/OpenAI, Google DeepMind, and Anthropic to halt their headlong Earth-threatening suicide-race toward AGI. Or those who help create a global moratorium or taboo on AGI that gives us the years (or decades) of breathing space needed for Alignment to be solved. Or those who help craft, enact, and enforce strict global limits on compute and data aimed at preventing AGI-level training runs, and those who adhere to them. Without these, I just don’t think there is time for Alignment to be solved. They are now the bottleneck through which the future flows, not Alignment research. (More.)
Also, it might even be the case that Alignment is impossible. If so, Alignment researchers could be instrumental in providing the theoretical proofs of this, and thus keep the world safe by providing justification for the continuation of an indefinite moratorium on AGI.
Although, to be clear, we owe historical Alignment researchers a huge debt of gratitude for raising awareness of AI x-risk.