Anyone else ever feel a strong discordance between emotional response and cognitive worldview when it comes to EA issues?
Like emotionally I’m like “save the animals! All animals deserve love and protection and we should make sure they can all thrive and be happy with autonomy and evolve toward more intelligent species so we can live together in a diverse human animal utopia, yay big tent EA…”
But logically I’m like “AI and/or other exponential technologies are right around the corner and make animal issues completely immaterial. Anything that detracts from progress on that is a distraction and should be completely and deliberately ignored. Optimally we will build an AI or other system that determines maximum utility per unit of matter, possibly including agency as a factor and quite possibly not, so that we can tile the universe with sentient simulations of whatever the answer is.”
OR, a similar discordance between what was just described and the view that we should also co-optimize for agency, diversity of values and experience, fun, decentralization, etc., EVEN IF that means possibly locking in a state in which ~99.9999+ percent of possible utility goes unrealized.
Very frustrating. I usually try to push myself toward my rational conclusion about what is best, with a wide margin for uncertainty and epistemic humility, but it feels depressing, painful, and self-dehumanizing to do so.
I don’t know if it helps, but your “logical” conclusions are far more likely to be wildly wrong than your “emotional” responses. Your logical views depend heavily on speculative factors, like how likely AI tech is, how impactful it will be, or what the best philosophy of utility is. Whereas the view on animals depends on comparatively few assumptions, like “hey, these creatures that are similar to me are suffering, and that sucks!”
Perhaps the dissonance is less irrational than it seems...
Yes! This is helpful. I think one of the main places where I get caught up is taking expected value calculations very seriously even though they are wildly speculative: it seems like there is a very small chance that I might make a huge difference on an issue that ends up being absurdly important, and it is hard to use my intuition on that kind of thing. With things that are close by, on the other hand, my intuitions clearly help; it is easier to see that I am doing some good, and harder to spin wild speculations about having a hugely positive impact. So I guess part of the issue is to what degree I should depend on these wildly speculative EV calculations. I really want to maximize impact, yet it is always a tenuous balancing act with so much uncertainty.
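To make concrete what I mean by “wildly speculative”, here is a minimal sketch with entirely made-up numbers (the probabilities and impact figures are illustrative assumptions, not anything anyone has measured):

```python
# Illustrative only: every number here is invented to show the shape of the
# problem, not to describe any real cause or intervention.

def expected_value(p_success, impact_if_success):
    """Naive expected value: probability of success times impact if it works."""
    return p_success * impact_if_success

# A "close by" intervention: high confidence, modest impact.
concrete = expected_value(p_success=0.9, impact_if_success=100)

# A speculative long shot: tiny, highly uncertain chance of astronomical impact.
speculative = expected_value(p_success=1e-9, impact_if_success=1e15)

print(f"concrete EV:    {concrete:,.0f}")     # 90
print(f"speculative EV: {speculative:,.0f}")  # 1,000,000

# The long shot "wins" by four orders of magnitude, yet if the true probability
# were 1e-14 rather than 1e-9 (easily within my uncertainty), its EV would fall
# to 10 and the ranking would flip. The conclusion hangs on a number nobody can
# pin down.
```

That sensitivity to one unknowable number is basically the balancing act I mean.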
I relate to that a lot, and I want to share how I resolved some of this tension. You currently allow your heart to only say “I want to reduce suffering and increase happiness” and then your brain takes over and optimizes, ignoring everything else your heart is saying. But it’s an arbitrary choice to only listen to the most abstract version of what the heart is saying. You could also allow your heart to be more specific like “I want to help all the animals!”, or even “I want to help this specific animal!” and then let your brain figure out the best way to do that. The way I see it, there is no objectively correct choice here. So I alternate on how specific I allow my heart to be.
In practice, it can look like splitting your donations between charities that give you a warm, fuzzy feeling, and charities that seem most cost-effective when you coldly calculate, as advised in Purchase Fuzzies and Utilons Separately. Here is an example of someone doing this. Unfortunately, it can be much more difficult to do this when you contribute with work rather than donations.
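To make “in practice” concrete, a purely hypothetical split might look like this (the budget, percentages, and charity descriptions are invented for the example, not recommendations):

```python
# Hypothetical illustration of purchasing fuzzies and utilons separately.
# All figures are made up; adjust the split to whatever feels right to you.

budget = 1000.0          # yearly donation budget in dollars
fuzzies_share = 0.20     # portion reserved for causes that feel personally meaningful

fuzzies = budget * fuzzies_share          # e.g. a local animal shelter you love
utilons = budget * (1 - fuzzies_share)    # e.g. whatever your cold calculation ranks highest

print(f"warm-fuzzy donations:     ${fuzzies:.2f}")   # $200.00
print(f"cost-effective donations: ${utilons:.2f}")   # $800.00
```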
Mmm yeah, I really like this compromise; it leaves room for being human. But indeed, I’m thinking more about career at the moment. Since I’ve struggled to find a career that is both impactful and a good fit for my skills, I’m thinking I might choose a relatively stable, normal job that I like (like being a therapist for enlightened people / people who meditate), and then use my free time to work on projects that could be massively impactful.