I understand your concern. It seems like your model is that you assume most people start with a sort of organic, healthy gut-level caring and sense of fellow-feeling, which moral calculation tends to distort.
My model is the reverse. Most people are somewhere between cold and unfeeling, and aggressively egocentric. Moral reflection builds into them some capacity for paying attention to others and cultivating empathy, which at first starts as an intellectual exercise and eventually becomes a deeply ingrained and felt habit that feels natural.
By analogy, you seem to see moral reflection as turning humans into robots. By contrast, I see moral reflection as turning animals into humans. Or think of it like acting. If you’ve ever acted, or read lines for a play in school, you might have experienced that at first, it’s hard even to understand what your character is saying or to identify their objectives. After time with the script, actors understand the goal and develop an intellectual understanding of their character and the actions they use to convey emotion. The greatest actors are perhaps method actors, who spend so much time with their character that they actually feel and think naturally like their character. But this takes a lot of time and effort, and it seems to require starting with a more intellectualized relationship with the character.
As I see it, this is pretty much how we develop our adult personalities and figure out how to fit into the social world. Maybe I’m wrong—maybe most people have a nice well-adjusted sense of fellow feeling and empathy from the jump, and I’m the weird one who’s had to work on it. If so, I think that my approach has been successful, because I think most people I know see me as an unusually empathic and emotionally aware person.
I can think of examples of people with all four combinations of moral systematization and empathy: high/high, high/low, low/high, and low/low. I’m really not sure how the correlations run.
Overall, this seems like a question for psychology rather than a question for philosophy, and if you’re really concerned that consequentialism will turn us into calculators, I’d be most interested to see that argument referring to the psych literature rather than the philosophy literature.
It seems like your model is that you assume most people start with a sort of organic, healthy gut-level caring and sense of fellow-feeling, which moral calculation tends to distort.
Moral calculation (and faking it ’til you make it) can be helpful in becoming more virtuous, but only to a limited extent – you can push it too far. And anyway, it’s not the only way to become a better person. I think what I mentioned at the end of my post is more helpful:
Encourage your friends to call out your vices. (In turn, steer your friends away from vice and try to be a good role model for the impressionable). Engage with good books, movies, plays etc. Virtue ethicists note that art has a great potential for exercising and training moral awareness...
If you want to see how the psych literature intersects with a related topic (romantic relationships instead of ethics in general), see Eva Illouz’s Why Love Hurts: A Sociological Explanation (2012), Chapter 3. Search for the heading “The New Architecture of Romantic Choice or the Disorganization of the Will” (p. 90 in my edition) if you want to skip right to it. You might be able to read the entire section through the Google Books preview. I recommend the whole book, though, if you’re interested.
I am really specifically interested in the claim you promote that moral calculation interferes with empathic development, rather than contributing to it or being neutral, on net. I don’t expect there’s much literature studying that, but that’s kind of my point. Why would we feel so confident that this or that morality has that or this psychological effect? I have a sense of how my morality has affected me, and we can speculate, but can we really claim to be going beyond that?
I claim that there is a healthy amount of moral calculation one should do, but that doing too much of it has harmful side-effects. I claim, for these reasons, that Consequentialism (and the culture surrounding it) tends to result in abuse of moral calculation more than virtue ethics (VE) does. I don’t expect abuse to arise in the majority of people who engage with or follow Consequentialism – just in more of them than among those who engage with or follow VE. I also claim, for reasons at the end of this section, that abuse will be more prevalent among those who engage with rationalism than among those who don’t.
If I’m right about this flaw in the community culture around here, and this flaw in any way contributed to SBF talking the way he did, shouldn’t the community consider taking some steps to curb that problematic tendency?
But also: if the EA community will only correct the flaws in itself that it can measure then… good luck. Seems short-sighted to me.
I may not have the data to back up my hypothesis, but it’s also not as if I pulled this out of thin air. And I’m not the first to find this hypothesis plausible.
No worries!
What you have is a hypothesis. You could gather data to test it. But we should not take any significant action on the basis of your hypothesis.
Fair enough!