I like the distinction of cause-first vs. member-first; thanks for that concept. Thinking about it in this context, I'm inspired to suggest a different cleavage that fits my worldview on EA better: alignment/integrity-first vs. power/impact-first.
I believe that for basically all institutions in the 21st century, alignment should be the highest priority; power should become the top priority only to the extent that the institution believes alignment at that power level has been solved.
Under this split, it seems clear that Elizabeth's reported actions prioritize alignment over impact.
Would you sometimes advocate for prioritizing impact (e.g. SUM shipping resources towards interventions) over alignment within the EA community?
I believe that until we learn how to prioritize alignment over impact, we aren't ready for as much power as we had at SBF's height.
Thanks for this; I agree that "integrity vs. impact" is a more precise cleavage point for this conversation than "cause-first vs. member-first".
Would you sometimes advocate for prioritizing impact (e.g. SUM shipping resources towards interventions) over alignment within the EA community?
Unhelpfully, I'd say it depends on the tradeoff's details. I certainly wouldn't advocate going all-in on one to the exclusion of the other. But to give one example of how I think about it: I'd currently prefer that a marginal $1M go to EA Funds' Animal Welfare Fund rather than be used to establish a foundation to investigate and recommend improvements to EA's epistemics.
It seems that I credit the EA community with a lot more "alignment/integrity" than you do. This could arise from empirical disagreements, different definitions of "alignment/integrity", and/or different expectations we place on the community.
For example, the evidence Elizabeth presented of a lack of alignment/integrity in EA is that some vegan advocates on Facebook incorrectly claimed that veganism doesn't have tradeoffs and weren't corrected by other community members. While I'd prefer people say true things rather than false things, especially when they affect people's health, this just doesn't feel important enough to update on. (I've also personally never heard any vegan advocate say anything like this, so it feels like an isolated case.)
One thing that could change my mind is learning about many more such cases, to the point that it's clear there are deep systemic issues with the community's epistemics. If there's a lot more evidence on this that I haven't seen, I'd love to hear about it!