I appreciate the effort to bridge two projects you think are valuable. A few thoughts/comments/disagreements:
1. One way to read this, it seems to me, is that it boils down to: if you like EA but also want more metacrisis/sensemaking/systems thinking than EA typically offers, then that’s us. Come say hi.
2. I feel like there’s some irony here: EA conversation norms tend towards very direct communication, while sensemaking folks tend to speak more indirectly. In pitching integral altruism, I can’t help but get the feeling that it is framed in fairly indirect language at times. It’s hard to name the exact dynamic, but I found myself working hard to understand parts of what this paragraph is trying to say (maybe that’s just me):
Psychological, emotional and spiritual development can help us cultivate a genuine desire for the wellbeing of others, resulting in altruism grounded in truth rather than being driven by guilt or pride. Such growth can also improve our epistemics by shining light on What’s Going On For Us and inspire action by deeply connecting us to the value we’re fighting for.
3. Some of these points seem surprising to include as things integral altruism adds, since they seem to me to be a regular part of EA discourse. I’m thinking of the sections that discuss valuing other things in life besides impact, and that inner work can lead to more impact.
4. I think a big decision point here is whether or not the merits of integral altruism will be argued on the territory of EA assumptions, and this post seems to move between the two. For example, you make the claim that there are real downsides to seeing x-risk in isolation rather than as interconnected with other problems. This seems big and important if true, and like something that could be argued comfortably within the framework of EA norms. I appreciate that puts the burden on you, but if you persuade folks here, I imagine that would be a big win for everyone. FWIW, whenever I’ve listened to folks talk about the metacrisis, I’ve literally not been able to understand the arguments. It could be a huge service to try to make the case for the metacrisis in EA-friendly language.
Thanks for the thoughtful response, Elliot!
On point 2 - yes, it’s a fair criticism of both int/a and the sensemaking folks that what we’re saying feels indirect. The challenge as I see it is that the things the sensemaking world is pointing at are just a lot harder to put in very explicit terms. That doesn’t necessarily mean stuff like the metacrisis doesn’t exist; it could just mean that it’s harder to point at, analyze, or get traction on.
I’ve heard metacrisis people describe EA as ‘searching for the keys under the lamppost’, in that EA focuses on the things that can be explicitly stated and modeled, which is not necessarily the same as the set of problems that exist. They would argue that instead of continuing to search under the lamppost, maybe we should build new lampposts, or buy a torch, or whatever. I don’t fully endorse this, but it’s a good intuition pump for where they’re coming from.
Part of int/a’s ambition in building this bridge is to try to cast sensemaking ideas in more direct, EA-brained terms (like this rough first attempt), but it’s tough and a work in progress!
On point 3 - sure, a lot of what we talk about here is already in EA discourse to varying degrees. I think the distinction is the degree to which the value is emphasized and practiced. For example, ‘personal fit’ is a meme that exists within EA, but in the 80k guide it feels like a footnote. In contrast, in int/a we intend for personal fit to be quite central and to inform the structure and emergent behavior of the movement.
On point 4 - yeah, great point. Ultimately it would be cool to examine whether int/a makes sense both on EA territory and on other territories.