[Question] Benefits/Risks of Scott Aaronson's Orthodox/Reform Framing for AI Alignment


Scott Aaronson argues that an Orthodox vs. Reform analogy works for AI alignment. I don't think I've heard it framed this way before. He gives his takes on the beliefs of each group, and there's a lot of discussion in the comments around that, but what's more interesting to me is thinking through the potential benefits and risks of framing it this way.

Perhaps the Reform option gives people a way to take the arguments seriously without feeling like they are aligning themselves with something too radical. If your beliefs tend Reformist, you probably like differentiating yourself from those with more radical-sounding beliefs. If your beliefs are more on the Orthodox side, maybe this is the "gateway drug" and more talent would find its way to your side. This has a little of the "bait-and-switch" dynamic that people sometimes complain about with EA (a complaint I do not at all endorse): that it pitches people on global health and animal welfare, but it's really all about AI safety. As long as people really do hold beliefs along Reformist lines, though, I don't see how that would be an issue.

Maybe the labels are just too sloppy: most people don't really fit into either camp, and it's bad to pretend that they do?

I'm not coming up with much else, but I'd be surprised if I weren't missing something.
