I’m really curious which description of EA you used in your study, could you post that here? What kind of attitudes towards EA did you ask about?
I can imagine there might be very different results depending on the framing.
My take on this is that while many more people than now might agree with EA ideas, fewer of them will find the lived practice and community to be a good fit. I think that’s a pretty unfortunate historical lock-in.
I’m really curious which description of EA you used in your study, could you post that here? What kind of attitudes towards EA did you ask about?
+1. There’s a big gap, I’d guess, between “your dollar goes further overseas” and “we must reduce risk from runaway AI”.
while many more people than now might agree with EA ideas, fewer of them will find the lived practice and community to be a good fit. I think that’s a pretty unfortunate historical lock-in
As Nick said, it would be wonderful to see follow-up studies here that try to flesh out these different aspects. We don’t think we’re covering everything in EA (although the description Nick posted below is from effectivealtruism.org, so it seemed like a decent first attempt). But that certainly seems right: you could get very different answers to “who likes extreme altruism”, “who likes AI safety”, etc.
The community question is a particularly interesting one because the community might be more of a historical artifact than a necessary trait of the movement. There could be people who would be a perfect fit for the ideas of EA (however defined: x-risk, donating 50%, etc.), but who still might not like the current community. How to actually deal with that finding would be a different question, but it seems worth knowing.
Thanks both, great point. We focused the description in this study on the effective giving and career choice aspects of EA, and the results may well be different depending on the framing—it’d be worth replicating with something like x-risk. Here’s the full description (built from ea.org):
“What is Effective Altruism? Thinking carefully about how to do good. Effective altruism is about answering one simple question: how can we use our resources to help others the most? Rather than just doing what feels right, we use evidence and careful analysis to find the very best causes to work on.
Most of us want to make a difference. We see suffering, injustice and death, and are moved to do something about them. But working out what that ‘something’ is, let alone doing it, is a difficult problem.
Which cause should you support if you really want to make a difference? What career choices will help you make a significant contribution? Which charities will use your donation effectively? If you don’t choose well, you risk wasting your time and money. But if you choose wisely, you have a tremendous chance to improve the world.
Effective altruism considers tradeoffs like the following: Suppose we want to fight blindness. For $40,000 we can provide guide dogs to blind people in the US. Or for $20 per patient, we can pay for surgery reversing the effects of trachoma in Africa (a disease which causes blindness). If people have equal moral value, then the second option is more than 2,000 times better than the first.”
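The arithmetic behind that tradeoff can be spot-checked in a few lines. This is just a sketch of the comparison in the quoted description — the dollar figures are the illustrative ones given there, not real cost-effectiveness estimates:

```python
# Cost per person helped, using the illustrative figures from the description above.
guide_dog_cost = 40_000   # USD to provide one guide dog to a blind person in the US
surgery_cost = 20         # USD per patient for trachoma-reversing surgery in Africa

# How many times further each dollar goes with the second intervention
ratio = guide_dog_cost / surgery_cost
print(ratio)  # 2000.0
```

On these numbers the surgery helps 2,000 people for the cost of one guide dog, which is the “more than 2,000 times” comparison the description is gesturing at.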
while many more people than now might agree with EA ideas, fewer of them will find the lived practice and community to be a good fit. I think that’s a pretty unfortunate historical lock-in
Serious question: Could we start a new one?