Hi EarlyVelcro,
I’m happy to see more debate about how much we should prioritise AI safety. We intend to debate some of these issues on the podcast, and have already started recording with Ben Garfinkel.
However, I think you’re misrepresenting how much the key ideas series recommends working on AI safety. We feature a range of other problem areas prominently, and I don’t think many readers will come away thinking that our position is that “EA should focus on AI alone”.
We list 9 priority career paths, of which only 2 are directly related to AI safety, recommend a variety of other options, and say that there are many good options we don’t list.
Elsewhere on the page, we also discuss the importance of personal fit and coordination, which can make it better for an individual to enter problem areas other than those we most highlight.
The most relevant section is short, so I’d encourage readers of this thread to read the section and make up their own mind.
Also see this clarification of how much we focus on different causes.