Thanks for taking a balanced view, but I would have liked to see more discussion of the replaceability argument, which really is pivotal here.
You say that whoever is hired into a progress-accelerating role, even if they are safety-conscious, will likely be highly effective in the role and so will accelerate progress more than an alternative candidate would. This is fair, but it may not be the whole story. Could the fact that they are safety-conscious mean they develop the AI in a safer way than the alternative candidate would? Maybe they would be more inclined to communicate and cooperate with the safety teams than an alternative candidate. Maybe they would be more likely to raise concerns to leadership, and so on.
If these latter effects dominate, it could be worth suggesting that people in the EA community apply even for progress-accelerating roles, and it could be more important for them to take roles at less reliable places like OpenAI than at slightly more reliable ones like Anthropic.