The object-level reasons are probably the most interesting and fruitful, but for a complete picture of how the differences might arise, it's also worth considering:
sociological reasons
meta-level incentive reasons
selection effects
An interesting exercise would be to go through these categories and spell out 1-3 reasons in each for why AI alignment people might believe X while cause prioritization people believe not-X.