This would be my practical question as well, for the following reasons.
I don’t see a way to ultimately resolve conflicts between an (infinite) optimizing (i.e., maximizing or minimizing) goal and other goals if they’re conceptualized as independent from the optimizing goal. Even if we treat the independent goals as something to only “satisfice” (i.e., take care of “well enough”) rather than optimize as far as possible, the optimizing goal, by its infinite nature, will still want to negotiate as many resources as possible for itself, and its reasons for earning its living within me are independently convincing (that’s why it’s an infinite goal of mine in the first place).
So an infinite goal of preventing suffering wants to understand why my conflicting other goals require a certain amount of resources (time, attention, energy, money) for them to be satisficed, and in practice this feels to me like an irreconcilable conflict unless they can negotiate by speaking a common language, i.e., one which the infinite goal can understand.
In the case of my other goals wanting resources from an {infinite, universal, all-encompassing, impartial, uncompromising} compassion, my so-called other goals start to be conceptualized through the language of self-compassion, which the larger, universal compassion understands as a practical limitation worth spending resources on – not for the other goals’ independent sake, but because they play a necessary and valuable role in the context of self-compassion aligned with omnicompassion. In practice, it also feels most sustainable and wise in the long term to usually if not always err on the side of self-compassion, and to only gradually attempt moving resources from self-compassionate sub-goals and mini-games towards the infinite goal. Eventually, omnicompassion may expect less and less attachment to the other goals as independent values, acknowledging only their relational value in serving the infinite goal, but it is patient: it understands human limitations, growing pains, and the counterproductive nature of pushing its infinite agenda too much too quickly.
If others have found ways to reconcile infinite optimizing goals with satisficing goals without a common language to mediate negotiations between them, I’d be very interested in hearing about them. That said, this approach already works for me, and I’m working on becoming able to write more about it, because it has felt like an all-around unified “operating system” replacing utilitarianism. :)
Hi Teo! I know your comment was from a few years ago, but I was so excited to see someone else in EA talk about self-compassion. Self-compassion is one of the main things that lets me be passionate about EA and have a maximalist moral mindset without spiraling into guilt, and I think it should be much more well-known in the community. I don’t know if you ever ended up writing more about this, but if you did, I hope you’d consider publishing it—I think that could help a lot of people!
Hi Ann, thanks for the reply! I agree that self-compassion can be an important piece of the puzzle for many people with an EA outlook.
I am definitely still working on reframing EA-related ideas and motivations so that the default language would not so easily lead to ‘EA guilt’ and some other problems. Lately I’ve been focusing on more general alternatives to ‘compassion’, because people often have different (and strong) preexisting notions of what compassion means, and so I’m not sure if compassion will serve as the kind of integrative ‘bridge concept’ that I’m looking for to help solve many (e.g. terminological) problems simultaneously.
So unfortunately I don’t have much (quickly publishable) stuff on compassion specifically, having been rotating abstract alternatives like ‘dissonance minimalism’ or ‘complex harmonization’. But who knows, maybe I’ll end up relating things via compassion again, at some point!
I’m not up-to-date on what the existing EA-memesphere writings on (self-)compassion are, but I love the Replacing Guilt series by Nate Soares (http://mindingourway.com/guilt), often mentioned on LW/EA. It has also been narrated as a podcast by Gianluca Truda. I believe it is a good recommendation for anyone who is feeling overwhelmed by the ambitions of EA.