One speculation I wanted to share here regarding the significant agreeableness difference (the obvious outlier): our test bank did not include any reverse-scored agreeableness items like ‘critical, quarrelsome’, which seems to be what is mainly driving the difference here.
Thanks Cameron! Yeh, I agree! And I think that the pattern at the item level is pretty interesting. Namely, EAs are reasonably ‘sympathetic, warm’, but a significant number are ‘critical, quarrelsome’. As I noted in the post, I think this matches common impressions of EAs (genuinely altruistic, but happy to bluntly disagree).
I wonder to what degree, in an EA context, the ‘critical, quarrelsome’ item in particular might have tapped more into openness than agreeableness for some respondents. In such an ideas-forward space, the item may have been read as something closer to ‘critical thinking; not afraid to question ideas’, whereas in more lay circles it might read as something more like ‘contrarian, argumentative’.
It’s an interesting theory! Fwiw, I checked, and the item-level correlations between the reverse-coded agreeableness item and the two openness items were both −0.001.
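For anyone who wants to reproduce this kind of check on their own data, here is a minimal sketch in Python/pandas, assuming TIPI-style 7-point items; the column names and example responses below are hypothetical, not our actual data:

```python
import pandas as pd

# Made-up example responses on a 7-point scale; column names are hypothetical.
df = pd.DataFrame({
    "critical_quarrelsome":    [2, 5, 4, 6, 3, 5, 4],  # reverse-scored agreeableness item
    "open_to_experiences":     [6, 5, 7, 4, 6, 5, 6],  # openness item
    "conventional_uncreative": [2, 3, 1, 4, 2, 3, 2],  # reverse-scored openness item
})

# Reverse-code on a 7-point scale: reversed = 8 - raw.
df["agreeableness_rev"] = 8 - df["critical_quarrelsome"]
df["openness_rev"] = 8 - df["conventional_uncreative"]

# Item-level Pearson correlations between the reverse-coded
# agreeableness item and the two openness items.
print(df["agreeableness_rev"].corr(df["open_to_experiences"]))
print(df["agreeableness_rev"].corr(df["openness_rev"]))
```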
This is pure speculation, but I think teasing apart the trade-off between EAs’ compassionate attitudes and their willingness to disagree intellectually would make for an interesting follow-up.
Agreed. My own speculation would be that EAs tend to place a high value on truth (in large part due to thinking it’s instrumentally necessary to do the most good). It also seems plausible to me that EA selects for people who are more willing to be disagreeable in this sense, since it implies being ready to say, somewhat disagreeably, ‘some causes are much more impactful than others, and we should prioritise those based on deliberation, rather than support more popular/emotionally appealing causes’.