as power struggles become larger-scale, more people who are extremely good at winning them will become involved. That makes AI safety strategies which require power-seeking more difficult to carry out successfully.
How can we mitigate this issue? Two things come to mind. Firstly, focusing more on legitimacy [...] Secondly, prioritizing competence.
A third way to potentially mitigate the issue is simply to become more skilled at winning power struggles. Such an approach would be uncooperative, and therefore undesirable in some respects, but on balance it seems to me worth pursuing to at least some degree.
… I realize that you, OP, have debated a very similar point before (albeit in a non-AI safety thread)—I’m not sure if you have additional thoughts to add to what you said there? (Readers can find that previous debate/exchange here.)