Level of involvement/familiarity: I work at an EA or EA-associated organization. Please post my five points separately so that people can discuss them without tangling the discussion threads.
In the absence of proper feedback loops, we will feel like we are succeeding while we are in fact stagnating and/or missing the mark. While I’m wary of using this as a fully general critique, some of the proxies we use for success seem to only loosely track what we actually care about. (See Goodhart’s Law.)
For instance, community growth is used as a proxy for success when it might in fact be an indicator of concept and community dilution. Engagement on OMfCT, while ‘engaging the EA community,’ seems to supplant real, critical engagement. (I’m really uncertain of this claim.) With the exception of a few people, often those from the early days of EA, there’s little generation of new content; instead there’s a meta-fixation on organizations and community critiques.
Tracking quality and novel content is really hard, but it seems far more likely to move EA into the public sphere, academia, etc. than boosting pretty numbers on a graph. We’re going to miss a lot of levers for influence if we keep resting on our intellectual laurels.
I’d like to see more essay contests and social rewards for writing, rather than the only response to such writing being blunt critiques of the content itself. I’d also like to see the development of more sophisticated metrics to gauge community development, rather than treating more quantifiable, scalable metrics as our only rigorous option.
There seems to be a sense in effective altruism that the existence of one organization working on a given problem means that the problem is now properly addressed. The thought appears to be: ‘(Organization) exists, so the space of evaluating (organization function) is filled and the problem is therefore taken care of.’
Organizations are just a few people working on a problem together, with some slightly better infrastructure, stable funding, and time. The problems we’re working on are too big for a handful of people to fix, and the fact that a handful of people are working in a given space doesn’t suggest that others shouldn’t work on it too. I’d like to see more recognition of the conceptual distinction between the existence of an organization with a certain mission, and what exactly is and is not being done to accomplish that mission. We could use more volunteers/partners to EA organizations, or even separate organizations addressing the same issue(s) using a different epistemology.
To encourage this, I’d love to see more support for individuals doing great projects who are better suited to the flexibility of doing work independently of any organization, or who otherwise don’t fit a hole in an organization.
I’m generally worried about how little most people actually seem to change their minds, despite being in a community that nominally holds the pursuit of truth in such high esteem.
Looking at the EA Survey, the best determinant of what cause a person believes to be important is the one that they thought was important before they found EA and considered cause prioritization.
There are also really strong founder effects in regional EA groups. That is, locals of one area generally seem to converge on one or two causes or approaches being best. Moreover, they often converge not because they moved there to be with those people, but because they ‘became’ EAs there.
Excepting a handful of people who have switched cause areas, it seems like EA as a brand serves more to justify what one is already doing, and to optimize within one’s existing comfort zone, than to actually change minds.
To fix this, I’d want to lower the barriers to changing one’s mind by, e.g., translating the arguments for one cause to the culture of a group often associated with another cause, and encouraging thought leaders and community leaders to be more open about the ways in which they are uncertain about their views so that others are comfortable following suit.
This is a great point. In addition to considering “how can we make it easier to get people to change their minds,” I think we should also be asking, “is there good that can still be accomplished even when people are not willing to change their minds?” Sometimes social engineering is most effective when it works around people’s biases and weaknesses rather than trying to attack them head on.
I agree that this is a problem, but I don’t agree with the causal model and so I don’t agree with the solution.
Looking at the EA Survey, the best determinant of what cause a person believes to be important is the one that they thought was important before they found EA and considered cause prioritization.
I’d guess that the majority of the people who take the EA Survey are fairly new to EA and haven’t encountered all of the arguments etc. that it would take to change their minds, not to mention all of the rationality “tips and tricks” to become better at changing your mind in the first place. It took me a year or so to get familiar with all of the main EA arguments, and I think that’s pretty typical.
TL;DR I don’t think there’s good signal in this piece of evidence. It would be much more compelling if it were restricted to people who were very involved in EA.
Moreover, they often converge not because they moved there to be with those people, but because they ‘became’ EAs there.
I’d propose a different model for the regional EA groups. I think that the founders are often quite knowledgeable about EA, and then new EAs hear strong arguments for whichever causes the founders like and so tend to accept that. (This would happen even if the founders try to expose new EAs to all of the arguments—we would expect the founders to be able to best explain the arguments for their own cause area, leading to a bias.)
In addition, it seems like regional groups often prioritize outreach over gaining knowledge, so you’ll have students who have heard a lot about global poverty and perhaps meta-charity who then help organize speaker events and discussion groups, even though they’ve barely heard of other areas.
Based on this model, the fix could be making sure that new EAs are exposed to a broader range of EA thought fairly quickly.
Perhaps one implication of this is that it’s better to target movement-growth efforts at students (particularly undergrads), since they’re less likely to have already made up their minds?
The high-value people from the early days of effective altruism are disengaging, and the high-value people who might join are not engaging. There are people who were once quite crucial to the development of EA ‘fundamentals’ who have since parted ways, and have done so because they are disenchanted with the direction in which they see us heading.
More concretely, I’ve heard many reports to the effect: ‘EA doesn’t seem to be the place where the most novel/talented/influential people are gravitating, because there aren’t community quality controls.’ While inclusivity is really important in most circumstances, it has a downside risk here that we seem to be experiencing. I believe we are likely to lose the interest and enthusiasm of those who are most valuable to our pursuits, because they don’t feel like they are around peers, and/or because they don’t feel that they are likely to be socially rewarded for their extreme dedication or thoughtfulness.
I think that the community’s dip in quality comes in part from the fact that you can get most of the community benefits without being a community benefactor—e.g. invitations to parties and likes on Facebook. At the same time, one incurs social costs for being more tireless and selfless (e.g., skipping parties to work), for being more willing to express controversial views (e.g., views that conflict with clan norms), or for being more willing to do important but low-status jobs (e.g., office manager, assistant). There’s a lot that we’d need to do in order to change this, but as a first step we should be more attentive to the fact that this is happening.
There’s a lot of mistrust between the different ‘clans’ in the EA community, and a lot of dismissal of the thinking of other clans. As someone who is relatively in touch with all of them, I gauge the mistrust to be overhyped and the dismissal to be uncalibrated.
If we want to hedge against groupthink, we need to try to reconcile our views with those of others who share our goals. At present, we seem to instead be making enemies of those few who are most able and willing to be our allies.
Yes, this is hard. There are lots of inferential gaps, years of contentious history, empirical unknowns, cultural differences between groups following different thought leaders… But this is important.
Anonymous #32(b):
What communities are the most novel/talented/influential people gravitating towards? How are they better?
I upvoted this mostly because it was new information to me, but I have the same questions as Richard.