Thanks Tobias, some good threads to pull here!

Yes, the question of whether int/a is a subset of EA, overlaps with it, or is something entirely different has been a big point of discussion, and we haven’t found a clean answer.
You’re right that EA in some sense already contains a lot of what int/a is excited about (especially since the official written principles are quite broad), but perhaps the real difference is what gets emphasized in practice.
For example:
Effective altruism doesn’t take a position on whether we are in conflict with the natural unfolding of the universe.
Yeah, EA doesn’t explicitly say anything about that, but what we’re pointing at is perhaps a cultural or semi-conscious current that pervades a lot of EA work (possibly this is more relevant to rationalism than to EA). This line was inspired in part by Joe Carlsmith’s An Even Deeper Atheism, which points out a current underlying a lot of EA/rationalist/AI safety thinking that is born out of a deep mistrust of everything (I might not be doing the essay justice, but that’s the general direction).
I’m not necessarily saying this current is bad, but rather that we should be aware of it, be able to step outside that frame of mind when it isn’t helping us, and integrate different frames. The hope is that int/a can more explicitly and consciously find the right balance between the yang-y vibe of mistrusting the universe and the yin-y vibe of trusting it.