Malo—bravo on this pivot in MIRI’s strategy and priorities. Honestly it’s what I’ve hoped MIRI would do for a while. It seems rational, timely, humble, and very useful! I’m excited about this.
I agree that we’re very unlikely to solve ‘technical alignment’ challenges fast enough to keep AI safe, given the breakneck rate of progress in AI capabilities. If we can’t speed up alignment work, we have to slow down capabilities work.
I guess the big organizational challenge for MIRI will be whether its current staff, who may have been recruited largely for their technical AI knowledge, general rationality, and optimism about solving alignment, can pivot towards this more policy-focused and outreach-focused agenda—which may require quite different skill sets.
Let me know if there’s anything I can do to help, and best of luck with this new strategy!