In other words, if the disagreement were “bottom-up”, then you’d expect that at least some people who are optimistic about misalignment risk would be pessimistic about other kinds of AI risk, such as what I call “human safety problems” (see examples here and here), but in fact I don’t see anyone whose position is something like, “AI alignment will be easy or likely solved by default, therefore we should focus our efforts on these other kinds of AI-related x-risks that are much more worrying.”
FWIW I know some people who explicitly think this. And I think there are also a bunch of people who think something like “the alignment problem will probably be pretty technically easy, so we should be focusing on the problems arising from humanity sometimes being really bad at technically easy problems”.
Sounds like their positions are not public, since you don’t cite anyone by name? Is there any reason for that?