As Nick said, it would be wonderful to see follow-up studies here that try to flesh out these different aspects. We don't think we're covering everything in EA (although the description Nick posted below is from effectivealtruism.org, so it seemed like a decent first attempt). But that certainly seems correct: you could have very different answers to "who likes extreme altruism", "who likes AI safety", etc.
The community question is a particularly interesting one because it might be more of a historical artifact than a necessary trait of the movement. There could be people who would be a perfect fit for the ideas of EA (however defined: x-risk, donating 50%, etc.), but still might not like the current community. How to actually act on that finding would be a different question, but it seems worth knowing.