(Not that it matters much, but my own guess is that many of us who are x-risk focused should a) be as cooperative and honest as possible with the public about our concerns with superhuman AI systems, and hope that there's enough time left for the balance of reason to win out, and b) focus on technical projects that don't involve much internal politics.
Working on cause areas that are less fraught than x-risk also seems like a comparatively good idea, now.
Organizational politics is both corrupting and not really our (or at least my) strong suit, so best to leave it to others).
Too soon to tell, I think. Probably better to wait for the dust to settle.