EA: Renaissance or Diaspora?
@Elizabeth and I recently recorded a conversation that we hope will become a whole podcast series. The original premise was that we would try to convince each other that we should both be EAs, or both not be. (She quit the movement earlier this year when she felt that her cries of alarm kept falling on deaf ears; I never left.)
Audio recording (35 min)
Some highlights:
@Elizabeth’s story of falling in love with, trying to change, and then falling out of love with Effective Altruism. That middle part draws heavily on past posts of hers, including “EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem” and “Truthseeking is the ground in which other principles grow”.
I told Elizabeth that I would also have left when she did (if I had had her experience).
I claimed that EA is ready for a Renaissance.
We both agreed that I should ‘check the integrity of Hogwarts’ by challenging EA to live up to my standards of integrity, and that I should also leave the movement if I give up on EA meeting that challenge (as Elizabeth did).
If you like the podcast or want to continue the conversation, tell us about it in the comments (or on LW if you want to make sure Elizabeth sees it), and consider donating toward future episodes.
Thanks for the interesting conversation! Some scattered questions/observations:
Your conversation reminds me of the debate about whether EA should be cause-first or member-first.
My self-identification as EA is cause-first: So long as the EA community puts resources broadly into causes which maximize the impartial good, I’d call myself EA.
Elizabeth’s self-identification seems to me to be member-first: it seems based more on whether community members act with integrity towards each other than on whether EA is maximizing the impartial good.
This might explain the difference between my and Elizabeth’s attitudes about how much it matters that some EAs claim veganism doesn’t entail tradeoffs and go uncorrected. I think being honest about health tradeoffs is important, but I’m far more concerned with shutting up and multiplying by shipping resources towards the best interventions. However, putting on a member-first hat, I can understand why, from Elizabeth’s perspective, this is so important. Do you think this is a fair characterization?
I’d love to understand more about how Elizabeth weighs raising awareness of veganism’s health tradeoffs against vegan advocacy itself:
If Elizabeth is trying to maximize the impartial good, she should probably be far more concerned about an anti-veganism advocate on Facebook than about a veganism advocate who (incorrectly) denies veganism’s health tradeoffs. Of course everyone should be transparent about health tradeoffs. However, if Elizabeth is being scope-sensitive about the dominance of farmed-animal effects, I struggle to understand why she places so much attention on veganism’s health tradeoffs relative to vegan advocacy.
By analogy, this feels like sounding an alarm because EA’s kidney donation advocates haven’t sufficiently acknowledged its potential adverse health effects. Of course everyone should acknowledge that. But when also considering the person being helped, isn’t kidney donation clearly the moral imperative?
I doubt that Elizabeth—or a meaningful number of her potential readers—are considering whether to be associated with anti-vegan advocates on Facebook or any movement related to them. I read the discussion as mainly about epistemics and integrity (these words collectively appear ~30 times in the transcript) rather than object-level harms.
I think it’s generally appropriate to be more concerned about policing epistemics and integrity in your own social movement than in others. This is in part about tractability—do we have any reason to think any anti-vegan activist movement on Facebook cares about its epistemics? If they do, do any of us have a solid reason to believe we would be effective in improving those epistemics?
It’s OK to not want to affiliate with a movement whose epistemics and integrity you judge to be inadequate. The fact that there are other movements with worse epistemics and integrity out there isn’t particularly relevant to that judgment call.
It’s unclear whether anti-vegan activists on Facebook are even part of a broader epistemic community. EAs are, so an erosion of EA epistemic norms and integrity is reasonably likely to cause broader problems.
In particular, the stuff Elizabeth is concerned about gives off the aroma of ends-justify-the-means thinking to me at points. False or misleading presentations, especially ones that pose a risk of meaningful harm to the listener, are not an appropriate means of promoting dietary change. [1] Moreover, ends-justify-the-means rationalization is a particular risk for EAs, as we painfully found out ~2 years ago.
I recognize there may be object-level disagreement here as to whether a given presentation is false, misleading, or poses a risk of meaningful harm.
Yes, I would even say that the original comment (which I intend to reply to next) seems to suffer from ends-justify-the-means logic as well (e.g. prioritizing “shutting up and multiplying” by “shipping resources to the best interventions” over “being honest about health effects”).
I like the distinction of cause-first vs member-first; thanks for that concept. Thinking about that in this context, I’m inspired to suggest a different cleavage that works better for my worldview on EA: Alignment/Integrity-first vs. Power/Impact-first.
I believe that for basically all institutions in the 21st century, alignment should be the highest priority, and power should only become the top priority to the extent that the institution believes that alignment at that power level has been solved.
By this split, it seems clear that Elizabeth’s reported actions prioritize alignment over impact.
Would you sometimes advocate for prioritizing impact (e.g. shutting up and multiplying by shipping resources towards interventions) over alignment within the EA community?
I believe that until we learn how to prioritize Alignment over Impact, we aren’t ready for as much power as we had at SBF’s height.
Thanks for this; I agree that “integrity vs impact” is a more precise cleavage point for this conversation than “cause-first vs member-first”.
Unhelpfully, I’d say it depends on the tradeoff’s details. I certainly wouldn’t advocate going all-in on one to the exclusion of the other. But to give one example of the way I think, I’d currently prefer that the marginal $1M go to EA Funds’ Animal Welfare Fund rather than toward establishing a foundation to investigate and recommend improvements to EA’s epistemics.
It seems that I see a lot more “alignment/integrity” in the EA community than you do. This could arise from empirical disagreements, different definitions of “alignment/integrity”, and/or different expectations we place on the community.
For example, the evidence Elizabeth presented of a lack of alignment/integrity in EA is that some veganism advocates on Facebook incorrectly claimed that veganism doesn’t have tradeoffs and weren’t corrected by other community members. While I’d prefer people say true things rather than false things, especially when they affect people’s health, this just doesn’t feel important enough to update on. (I’ve also just personally never heard any vegan advocate say anything like this, so it feels like an isolated case.)
One thing that could change my mind is learning about many more cases to the point that it’s clear that there are deep systemic issues with the community’s epistemics. If there’s a lot more evidence on this which I haven’t seen, I’d love to hear about it!
I might say kidney donation is a moral imperative (or good) if we consider only the effects on your welfare and the effects on the welfare of the beneficiaries. But when you consider indirect effects, things are less clear. There are effects on other people, nonhuman animals (farmed and wild), your productivity and time (which affects your EA work or income and donations), your motivation and your values. For an EA, productivity and time, motivation and values seem most important.
EDIT: And the same goes for veganism.
What do you mean by moral imperative?
I notice that I “believe in” minimum moral standards (like a code of conduct or laws) but not what I call moral imperatives (in X situation, I have no choice if I want to remain in good moral standing).
I also don’t believe in requiring organ donation as part of a minimum moral standard, which is probably related to my objection to the concept of “moral imperative”.
Thank you for sharing this Timothy. I left a long comment on the LW version of the post. I’m happy to talk about this more with you or Elizabeth — if you’re interested, you’re welcome to reach out to me directly.