Good points. I agree that EA’s message is often framed in a way that can seem alienating to people who don’t share all its assumptions. And I agree that people who don’t share all the assumptions are not necessarily being irrational.
Some people might argue that non-utilitarians will become utilitarian if they become more rational.
FWIW, I think there’s indeed a trend. Teaching rationality can be kind of a dick move (only in a specific sense) because it forces you to think about consequentialist goals and opportunity costs, which is not necessarily good for your self-image if you can’t look back on huge accomplishments or promising future prospects. As long as your self-image as a “morally good person” is tied to common-sense morality, you can do well by just not being an asshole to the people around you. And where common-sense morality is called into question, you can always rationalize, as long as you’re not yet being forced to look too closely. So people will say things like “I’m an emotional person” in order to ignore all the arguments these “rationalists” are making, which usually end with “This is why you should change your life and donate.” Or they adopt a self-image as someone who is “just not into that philosophy stuff” and so stop bothering to think about it once the discussions go too far.
LW or EA discourse breaks down these escape routes. Once it’s too late, once your brain spots its own blatant attempts at rationalizing, you’re forced to either self-identify as an (effective) altruist or not, or at least to state what percentage of your utility function corresponds to which. And self-identifying as someone who really doesn’t care about people far away, as opposed to someone who still cares but “community comes first” and “money often doesn’t reach its destination anyway” and “isn’t it all so uncertain, and stop with these unrealistic thought experiments already!” and “why are these EAs so dogmatic?”, is usually much harder. (At least for those who are empathetic/social/altruistic, or those who are in search of moral meaning in their lives.)
I suspect this is why rationality doesn’t correlate with making people happier. It’s easier to be happy if your goal is to do alright in life and not be an asshole. It gets harder if your goal is to help fix this whole mess, a mess that includes wild animal suffering and worries about the fate of the galaxy.
Arguably, people are being quite rational, on an intuitive level, when they can’t tell you what their precise consequentialist goal is. They’re satisficing, and it’s all working out fine for them, so why make things more complicated? A heretic could ask: why create a billion new ways in which they can fail to reach their goals? Maybe the best thing is to just never think about goals that are hard to reach. Edit: Just to be clear, I’m not saying people shouldn’t have consequentialist goals; I’m just pointing out that the picture, as I understand it, is kind of messy.
Handling the inherent demandingness of consequentialist goals is a big challenge, imo, both for EAs themselves and for making the movement more broadly appealing. I have written some thoughts on this here.