For a lot of people, working on capabilities is the best way to gain skills before working on safety. And if, across your career, you spend half your effort on each goal, that is probably much better than not working on AI at all.
It would be nice to know how many EAs are adopting this plan and how many actually end up working on safety. I don’t have the sense that most of them get to the safety half. I also think it is reasonable to believe that no amount of safety research can prevent armageddon, because the outcome of the research may just be “this is not safe”, as EY seems to report, and have no impact (the capabilities researchers don’t care, or the fact that we aren’t safe yet means they need to keep working on capabilities so that they can help with the safety problem).