I’d consider tweaking (3) to something like, “Make sure you don’t start a nuclear war based on a false alarm.” The current version has imo some serious downsides:
Association with planned disloyalty could hurt the efforts of everyone else in this community who’s trying to contribute to policy through dedicated service.
A (US) community’s reputation for planned non-retaliation might increase risk of nuclear war, because nuclear peace depends partly on countries’ perceptions that other countries are committed to retaliating against nuclear attack.
There wouldn’t be an association with planned disloyalty, or an implication for military strategy, because the person would keep their intention secret.
(I strongly upvoted this; I think some subset of EAs are overly naive about the channels through which nuclear policy could be used to reduce catastrophic risks.)