I don’t know if it helps, but your “logical” conclusions are far more likely to be wildly wrong than your “emotional” responses. Your logical views depend heavily on speculative factors like how likely AI tech is, how impactful it will be, or what the best philosophy of utility is. Whereas the view on animals depends on comparatively few assumptions, like “hey, these creatures that are similar to me are suffering, and that sucks!”.
Perhaps the dissonance is less irrational than it seems...
Yes! This is helpful. I think one of the main places where I get caught up is taking expected value calculations very seriously even though they are wildly speculative: there seems to be a very small chance that I might make a huge difference on an issue that turns out to be absurdly important, and my intuition is of little use on that kind of question. By contrast, my intuitions clearly help me with things that are close by, where it is much easier to see that I am doing some good but much harder to speculate that I might be having a hugely positive impact. So I guess part of the issue is how much I should depend on these wildly speculative EV calculations. I really want to maximize impact, yet it is always a tenuous balancing act with so much uncertainty.
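(For concreteness, here is a minimal sketch of the kind of sensitivity I mean. The numbers and the `expected_value` helper are purely hypothetical illustrations, not anyone's real estimates.)

```python
# Toy expected-value comparison (all numbers are made up for illustration).
# EV = P(I make a difference) * size of the difference.
# For speculative causes, tiny shifts in the guessed probability
# swing the answer by orders of magnitude.

def expected_value(p_make_difference: float, impact_if_successful: float) -> float:
    """Naive expected-value estimate: probability times payoff."""
    return p_make_difference * impact_if_successful

# Speculative cause: huge payoff, but the probability is a guess
# that could easily be off by factors of 100 or more.
for p in [1e-9, 1e-7, 1e-5]:
    print(f"speculative, p={p:.0e}: EV = {expected_value(p, 1e12):,.0f}")

# Nearby cause: modest payoff, but the inputs are things I can
# actually observe, so the estimate is far more stable.
print(f"nearby, p=0.9: EV = {expected_value(0.9, 1_000):,.0f}")
```

The point of the sketch is only that the speculative estimate is dominated by an input I can barely constrain, while the nearby estimate is not.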