Crossposting this comment from LW, because I think there is some value here:
https://www.lesswrong.com/posts/6YxdpGjfHyrZb7F2G/third-wave-ai-safety-needs-sociopolitical-thinking#HBaqJymPxWLsuedpF
The main points are: that value alignment will be far more necessary for ordinary people to survive, no matter which institutions are adopted; that the world hasn't yet weighed in much on AI safety and plausibly never will, though we should still prepare for a future in which AI safety becomes mainstream; that Bayesianism is fine, actually; and several more points in the full comment.