Thanks for writing this!

You’re describing integral altruism as broader than EA, but if I understand you correctly, it’s also narrower in many ways. Some examples:
Letting go of the need to control everything and transcending the frame that we are in conflict with the natural unfolding of the universe. This also means emphasising collective action over individual heroism.
–> Effective altruism doesn’t take a position on whether we are in conflict with the natural unfolding of the universe, and EAs emphasise collective action over individual heroism to varying degrees.
take radical uncertainty seriously
–> EAs already do this to varying degrees. If integral altruists take it really seriously, they are a subset of EAs in this regard.
altruism grounded in truth rather than being driven by guilt or pride
–> EA doesn’t say what your altruistic motivation should be grounded in. All of the reasons you list are considered viable (although people of course disagree about the degree to which each is conducive or should be encouraged).
Some of the things you describe (especially the ‘different ways of knowing’) sit more outside of what is common within EA. In those respects, integral altruism does seem broader.
Overall I’m not completely sure whether integral altruism is a way of doing effective altruism differently, or a competing (though often overlapping) worldview.
Thanks Tobias, some good threads to pull here!

Yes, the question of whether int/a is a subset of EA, overlapping with it, or something else entirely has been a big point of discussion, and we haven’t found a clean answer.
You are right that EA in some sense already contains a lot of what int/a is excited about (especially since the official written principles are quite broad), but perhaps the real difference is what is emphasized in practice.
For example:
Effective altruism doesn’t take a position on whether we are in conflict with the natural unfolding of the universe.
Yeah, EA doesn’t explicitly say anything about that, but what we’re pointing at is a cultural or semi-conscious current that pervades a lot of EA work (possibly this is more relevant to rationalism than to EA). This line was inspired in part by Joe Carlsmith’s An Even Deeper Atheism, which points out a current underlying a lot of EA/rationalist/AI safety thinking that is born out of a deep mistrust of everything (I might not be doing the essay justice, but that’s the general direction).
I’m not necessarily saying this current is bad, rather that we should be aware of it, be able to step outside of that frame of mind when it isn’t helping us, and integrate different frames. The hope is that int/a can more explicitly and consciously find the right balance between the yang-y ‘mistrust the universe’ vibe and the yin-y ‘trust the universe’ vibe.