“Liberty: Prioritizes individual freedom and autonomy, resisting excessive governmental control and supporting the right to personal wealth. Lower scores may be more accepting of government intervention, while higher scores champion personal freedom and autonomy...”
“alignment researchers are found to score significantly higher in liberty (U=16035, p≈0)”
This partly explains why so much of the alignment community doesn’t support PauseAI!
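For context, the quoted statistic comes from a Mann-Whitney U test comparing liberty scores across two groups. A minimal sketch of how such a U statistic and p-value are computed, using simulated scores and made-up group sizes (not the survey's actual data):

```python
# Illustrative only: group sizes, means, and scores below are simulated
# assumptions, not the study's data.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
alignment = rng.normal(loc=4.0, scale=1.0, size=120)  # hypothetical liberty scores
reference = rng.normal(loc=3.0, scale=1.0, size=150)  # hypothetical comparison group

# One-sided test: do alignment researchers score higher in liberty?
u_stat, p_value = mannwhitneyu(alignment, reference, alternative="greater")
print(u_stat, p_value)  # a large U and tiny p indicate higher liberty scores
```

Under the null hypothesis the expected U is n1*n2/2 (here 9000), so a much larger observed U with a near-zero p-value is what a result like "U=16035, p≈0" reflects.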
Good find! Two additional points of context:
1. Alignment researchers from our sample broadly do support pausing or dramatically slowing AI development.
2. In training a classifier to predict alignment researchers’ answers to the above question (using their personality, values, and moral foundations data as features), we find that the single most important (i.e., most predictively relevant) feature is indeed their liberty moral foundation score. So liberty does seem to mediate researchers’ willingness to support pausing AI development.
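The feature-importance analysis described above can be sketched as follows. This is a hypothetical reconstruction, not the study's actual pipeline: the feature names, sample size, model choice (a random forest), and simulated labels are all illustrative assumptions.

```python
# Hypothetical sketch of ranking moral-foundation features by predictive
# relevance; all data here is simulated, with "liberty" built in as the signal.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200
features = ["liberty", "care", "fairness", "loyalty", "authority", "sanctity"]

# Simulated moral-foundation scores; the label (1 = supports pause/slowdown)
# is driven mostly by the "liberty" column plus noise.
X = rng.normal(size=(n, len(features)))
y = (X[:, 0] + 0.3 * rng.normal(size=n) < 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Permutation importance on held-out data ranks features by how much
# shuffling each one degrades the classifier's accuracy.
imp = permutation_importance(clf, X_te, y_te, n_repeats=20, random_state=0)
ranked = sorted(zip(features, imp.importances_mean), key=lambda t: -t[1])
print(ranked)
```

In this simulation the liberty column tops the ranking by construction; the study's claim is that the analogous ranking on the real survey data put the liberty score first.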
Hi Cameron :) thanks for replying! I am concerned that the question is double-barreled (i.e., a survey question that actually asks about two different things but only allows one answer, which can confuse respondents and lead to inaccurate data). Are you?
Hi Yanni, this is definitely an important consideration in general. Our goal was basically to probe whether alignment researchers think the status quo of rapid capabilities progress is acceptable/appropriate/safe or not. I definitely agree that for those interested, e.g., in distinguishing whether alignment researchers support a full-blown pause OR just a dramatic slowing of capabilities progress, this question would be insufficiently precise. But for our purposes, having the ‘or’ statement doesn’t really change what we were fundamentally attempting to probe.