2. Just throwing it out there: Should EA embrace being apolitical? As in, possible official core virtue of the EA movement proper: Effective Altruism doesn’t take sides on controversial political issues, though of course individual EAs are free to.
Robin Hanson’s “pulling the rope sideways” analogy has always struck me: in the great societal tug-of-war debates over abortion, immigration, and taxes, it’s rarely effective to pick a side and pull. First, you’re one of many, facing plenty of opposition, which makes your goal difficult to accomplish. Second, if half the country thinks your goal is bad, it very well might be. Pulling the rope sideways, on the other hand, is easy: nobody’s going to filibuster to prevent you from handing out malaria nets—everybody thinks it’s a good idea.
(This doesn’t mean staying out of politics entirely. 80k writes about improving political decision-making and becoming a congressional staffer—both nonpartisan ways to do good in politics.)
If EA were officially apolitical like this, we would benefit by Hanson’s logic: we can more easily achieve our goals without enemies, and we’re more likely to be right. But we could also gain credibility and influence in the long run by refusing to enter the political fray.
I think part of EA’s success comes from its being an identity label, almost a third party—an ingroup for people who dislike the Red/Blue identity divide. I’d say most EAs (and certainly the EAs who do the most good) identify much more strongly with EA than with any political ideology. That keeps us more dedicated to the ingroup.
But I could imagine an EA failure mode where, a decade from now, Vox is the most popular “EA” platform and the average EA is liberal first, effective altruist second. This happens if EA becomes synonymous with other, more powerful identity labels—kinda how animal rights and environmentalism could be their own identities, but they’ve mostly been absorbed into the political left.
If apolitical were an official EA virtue, we could easily disown German Lopez on marijuana or Kamala Harris and criminal justice—improving epistemic standards and avoiding making enemies at the same time. Should we adopt it?
This is an interesting essay. My thinking is that “coalition norms”, under which politics operate, trade off instrumental rationality against epistemic rationality. I can argue that it’s morally correct from a consequentialist point of view to tell a lie in order to get my favorite politician elected so they will pass some critical policy. But this is a Faustian bargain in the long run, because it sacrifices the epistemology of the group, and causes the people who have the best arguments against the group’s thinking to leave in disgust or never join in the first place.
I’m not saying EAs shouldn’t join political coalitions. But I feel like we’d be sacrificing a lot if the EA movement began sliding toward coalition norms. If you think some coalition is the best one, you can go off and work with that coalition. Or if you don’t like any of the existing ones, create one of your own, or maybe even join one & try to improve it from the inside.
We should mostly treat political issues like other issues—see what the evidence says, do some modeling, and take sides. There isn’t a clean distinction between what the movement believes and what individuals believe; there are just points of view that are variously more or less popular. If a political issue becomes just as well-supported as, say, GiveWell charities, then we should move toward it. In both cases, of course, people are allowed to disagree.
However, political inquiry must be done to higher epistemic standards, with extra care to avoid alienating people. Vox has fallen below this threshold for years.