In the early days of the EA movement, when it was uncertain whether expansion was even possible, I can see “try and expand like crazy, see what happens” as being a sensible option. But now we know that expansion is very possible and there’s a large population of EA-amenable people out there. The benefits of reaching these people a bit sooner than we would otherwise seem marginal to me. So at this point I think we can afford to take the focus off movement growth for a bit and think more deeply about exactly what we are trying to achieve. Brain dump incoming...
Does hearing about EA in its current form actually seem to increase folks’ effective altruist output? (Why are so many EAs on the survey not donating anything?)
Claiming to be “effective altruists” amounts to a sort of holier-than-thou claim. Mildly unethical behavior from prominent EAs that would be basically fine in any other context could be easy tabloid fodder for journalists, given the target EA has painted on its own back. There have already been a few controversies along these lines (not gonna link to them). EA’s holier-than-thou attitude also invites unfavorable contrasts with, say, giving to help family members.
EA has neglectedness as one of its areas of focus. But if a cause is neglected, it’s neglected for a reason. Sometimes it’s neglected because it’s a bad cause. Other times it’s neglected because it sounds like a bad cause but there are complicated reasons why it might actually be a good one. EA’s failure to communicate neglectedness well leads to people saying things like “Worrying about sentient AI as the ice caps melt is like standing on the tracks as the train rushes in, worrying about being hit by lightning”. That’s a terrible misunderstanding: EAs mostly think that global warming is a problem that needs to be addressed, but that AI risk receives much less funding and might be a better use of funds on the margin. The problem is that by branding itself as “effective altruism”, EA is implicitly claiming that any cause EA isn’t working on is an ineffective one. That gets interpreted as a holier-than-thou attitude and riles anyone who’s working on a different cause (even if we actually agree it’s a pretty good one).
Some EAs cheered for the Dylan Matthews Vox article that prompted the tweet I linked to above, presumably because they agree with Matthews. But finding a reporter to broadcast your criticisms of the EA movement to a huge readership in order to gain leverage and give your cause more movement mindshare is a terrible defect/defect equilibrium. This is a similar conflict to the one at the heart of Tom_Davidson’s piece. EA is always going to have problems with journalists because of the neglectedness point I made above. Doing good and looking good are not the same thing, and it’s not clear how to manage that tradeoff, or how best to spend our “weirdness points”.
In line with this, you can imagine an alternate branding for EA that focuses on the weakest links in our ideological platform… for example, the “neglected causes movement” (“Neglected Causes Global”?), or the “thoughtful discussion movement”/“incremental political experimentation movement” if we decided to have a systemic change focus. (Willingness is not the limiting factor on doing effective systemic change! Unlike with philanthropy, many people are extremely interested in doing systemic change. The limiting factor is people forming evidence filter bubbles and working at cross purposes to one another. As far as I can tell, EA as a movement is not especially good at avoiding filter bubbles. “Donate 10% of your time/energy towards systemic change” fails to solve the systemic problems with systemic change.) As far as I can tell, none of these alternate brandings have been explored. Nor has there been any discussion of whether EA is better as a single tentpole or as multiple tentpoles, with an annual conference for neglected causes, an annual conference for avoiding filter bubbles, and so on.
There’s no procedure in place for resolving large-scale disagreement within the EA movement. EA is currently a “do-ocracy”, which leads to the unilateralist’s curse and other problems. In the limit of growth, we risk resolving our disagreements the same way society at large does: with shouting and/or fists. Ideally there would be some kind of group rationality best practices baked into the EA movement. (These could even be a core branding focus.) The most important disagreement to resolve in a cooperative way may be how to spend our weirdness points.
EA is trying to be a “big tent”, but it doesn’t realize how difficult this is. The most diverse groups are the ones that are able to engineer their diversity: universities and corporations can hold up a degree or job as a carrot and select people so as to get a representative cross-section of the population. In the absence of such engineering, groups tend to get less diverse over time. Even Occupy Wall Street was disproportionately white. That’s why people who say “I like the idea of altruistic effectiveness, but not the EA movement’s implementation” don’t hang around: it’s stressful to have persistent, important disagreements with everyone around you. (EA’s definitional confusion might also eventually result in EA becoming a pernicious meme that has defined itself to be great. I’m somewhat in favor of trying to make sure we really have identified the world’s highest-impact causes before doing further expansion. People like Paul Christiano have argued, convincingly IMO, that there are likely to be high-impact causes not yet on the EA movement’s radar. And focusing on funneling people towards a particular cause also helps address “meta trap” issues.) EA is trying to appeal to people of all ages, races, genders, political orientations, religions, etc. with very little capability for diversity engineering. It’s difficult to imagine any other group in society being this ambitious.
There’s no procedure in place for resolving large-scale disagreement within the EA movement. EA is currently a “do-ocracy”, which leads to the unilateralist’s curse and other problems. In the limit of growth, we risk resolving our disagreements the same way society at large does: with shouting and/or fists. Ideally there would be some kind of group rationality best practices baked into the EA movement. (These could even be a core branding focus.)
This seems particularly important to me. I’d love to hear more in-depth thoughts if you have any. Even if not, I think it might be worth a top-level post to spur discussion.
One category of solutions is the various voting and governance systems. Score voting seems pretty solid based on my limited reading (a minimal sketch is below). There are also more exotic proposals like futarchy/prediction markets and eigendemocracy. The downside of systems like this is that once you give people a way to keep score, they sometimes become focused on increasing their score (through forming coalitions, etc.) at the expense of figuring out what’s true.
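To make the “way to keep score” concrete, here’s a minimal Python sketch of score voting, under simplifying assumptions: the proposal names and the 0-to-5 scale are made up for illustration, ties and score normalization are ignored, and real implementations vary on all of these.

```python
from typing import Dict, List

# A ballot maps each option to a score; assuming a 0-5 scale here.
Ballot = Dict[str, int]

def score_voting_winner(ballots: List[Ballot]) -> str:
    """Return the option with the highest total score across all ballots."""
    totals: Dict[str, int] = {}
    for ballot in ballots:
        for option, score in ballot.items():
            totals[option] = totals.get(option, 0) + score
    return max(totals, key=totals.get)

# Three hypothetical voters scoring three hypothetical proposals.
ballots = [
    {"proposal_a": 5, "proposal_b": 2, "proposal_c": 0},
    {"proposal_a": 3, "proposal_b": 4, "proposal_c": 1},
    {"proposal_a": 1, "proposal_b": 5, "proposal_c": 4},
]
print(score_voting_winner(ballots))  # -> proposal_b (totals: 9, 11, 5)
```

Note that the failure mode above shows up even in this toy version: a coalition that strategically maxes out its own proposal’s scores and zeroes everything else can swing the totals, which is exactly the score-gaming worry.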
There are also “softer” solutions like trying to spread beneficial social norms. Maybe worrying about this is overkill in a group made up of do-gooders anyway, as long as moral trade is emphasized enough that people with very different value systems can still find ways to cooperate.
You’re more than welcome to think things over and write a top level post.
Why are so many EAs on the survey not donating anything?
This I can answer, at least. The vast majority of the EAs recorded as giving $0 in the survey met at least one (and often more) of these criteria: (i) full-time student, (ii) had already donated a large amount in the past (even if not in that particular year), (iii) had pledged to give a substantial amount. The same applied to EAs giving merely ‘low’ amounts, e.g. <$500. I gave the figures in a comment on an earlier thread where this was raised (probably the survey thread).
Some EAs cheered for the Dylan Matthews Vox article that prompted the tweet I linked to above, presumably because they agree with Matthews. But finding a reporter to broadcast your criticisms of the EA movement to a huge readership in order to gain leverage and give your cause more movement mindshare is a terrible defect/defect equilibrium.
Matthews is an EA, and identifies as one in that piece. This wasn’t about finding someone to broadcast things; it was someone within the movement trying to shape it.
(I do agree with you that we shouldn’t be trying to enlist the greater public to take sides in internal disagreements over cause prioritization within EA.)