I’m generally worried about how little most people actually seem to change their minds, despite being in a community that nominally holds the pursuit of truth in such high esteem.
Looking at the EA Survey, the best predictor of which cause a person currently believes is most important is which cause they thought was important before they found EA and considered cause prioritization.
There are also really strong founder effects in regional EA groups. That is, EAs in a given area generally seem to converge on one or two causes or approaches as the best ones. Moreover, they often converge not because they moved there to be with like-minded people, but because they ‘became’ EAs there.
Excepting a handful of people who have switched cause areas, it seems like EA as a brand serves more to justify what people were already doing, and to help them optimize within their existing comfort zones, than to actually change minds.
To fix this, I’d want to lower the barriers to changing one’s mind: for example, by translating the arguments for one cause into the culture of a group often associated with another cause, and by encouraging thought leaders and community leaders to be more open about the ways they are uncertain about their views, so that others feel comfortable following suit.
Anonymous #32(e):
This is a great point. In addition to considering “how can we make it easier to get people to change their minds,” I think we should also be asking, “is there good that can still be accomplished even when people are not willing to change their minds?” Sometimes social engineering is most effective when it works around people’s biases and weaknesses rather than trying to attack them head-on.
I agree that this is a problem, but I don’t agree with the causal model and so I don’t agree with the solution.
“Looking at the EA Survey, the best predictor of which cause a person currently believes is most important is which cause they thought was important before they found EA and considered cause prioritization.”
I’d guess that the majority of the people who take the EA Survey are fairly new to EA and haven’t yet encountered all of the arguments it would take to change their minds, let alone the rationality “tips and tricks” that make you better at changing your mind in the first place. It took me a year or so to get familiar with all of the main EA arguments, and I think that’s pretty typical.
TL;DR I don’t think there’s good signal in this piece of evidence. It would be much more compelling if it were restricted to people who were very involved in EA.
“Moreover, they often converge not because they moved there to be with like-minded people, but because they ‘became’ EAs there.”
I’d propose a different model for the regional EA groups. I think that the founders are often quite knowledgeable about EA, and then new EAs hear strong arguments for whichever causes the founders like, and so tend to accept those causes. (This would happen even if the founders try to expose new EAs to all of the arguments: we would expect founders to explain the arguments for their own cause area best, which introduces a bias.)
In addition, it seems like regional groups often prioritize outreach over gaining knowledge, so you’ll have students who have heard a lot about global poverty and perhaps meta-charity helping to organize speaker events and discussion groups, even though they’ve barely heard of other cause areas.
Based on this model, the fix could be making sure that new EAs are exposed to a broader range of EA thought fairly quickly.
Perhaps one implication of this is that it’s better to target movement-growth efforts at students (particularly undergrads), since they’re less likely to have already made up their minds?