Suppose there is some kind of new moral truth, but only one person knows it. (Arguably, there will always be a first person. New moral truth might be the adoption of a moral realism, a more rigorous application of reason in moral affairs, an expansion of the moral circle, an intensification of what we owe the beings in the moral circle, or a redefinition of what “harm” means.)
This person may well adopt an affectively expensive point of view, one that won’t make any sense to their peers (or may make all too much sense). Those peers may have their feelings hurt by the new moral truth and retaliate against its bearer. The bearer may endure an almost self-destructive life pattern because the truth is dissonant with the status quo; other peers will object to this pattern, pressure the person to give up their moral truth, and wear away at them in an effort to “save” them. And in the process of resisting the “caring peer”, the new-moral-truth person does things that hurt the “caring peer’s” feelings.
There are at least two ideologies at play here. (The new one and the old one, or the old ones if there are more than one.) So we’re looking at a battle between ideologies, played out on the field of accounting for personal harm. Which ideology does a norm of honoring the least-cost principle favor? Wouldn’t all the harm that gets traded back and forth simply never occur if the new-moral-truth person had never adopted their new ideology in the first place? So the “court” (popular opinion? an actual court?) that enforces the least-cost principle would probably interpret things according to the status quo’s point of view and enforce adherence to the status quo. But if there is such a thing as moral truth, then we are better off hearing it, even if it’s unpopular.
Perhaps the least-cost principle is good, but there should be some provision in a “court” for considering whether ideologies are true and thus inherently require a certain set of emotional reactions.
These are all great considerations! However, I think it’s perfectly consistent with my framework to analyze the total costs of avoiding a harm, including harms to society from discouraging true beliefs or chilling the reasoned exchange of ideas. So in the case you imagine, there’s a big societal moral cost attached to the peers’ reactions, which they therefore have good reason to try to minimize.
This generalizes to the case where we don’t know whose moral ideas are true: we “penalize” (or at least decline to indulge) psychological frameworks that impede moral discourse and reasoning (perhaps this is one way of understanding the First Amendment).
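To make the cost accounting concrete, here is a minimal sketch with purely hypothetical, illustrative magnitudes (none of these figures appear in the original argument): once the societal cost of discouraging a true belief is counted alongside the truth-bearer’s personal cost, the peers come out as the least-cost avoiders.

```python
# Minimal sketch of the least-cost comparison described above.
# All magnitudes are hypothetical placeholders, chosen only to show
# how including societal costs can flip who the least-cost avoider is.

# Costs if the truth-bearer avoids the conflict by abandoning the new moral truth
truth_bearer_personal_cost = 10      # giving up a sincerely held (and true) view
societal_cost_of_lost_truth = 100    # chilled moral discourse; the truth goes unheard

# Costs if the peers avoid the conflict by tolerating the discomfort
peers_emotional_cost = 20            # hurt feelings from living with the dissonant view

cost_if_truth_bearer_yields = truth_bearer_personal_cost + societal_cost_of_lost_truth
cost_if_peers_yield = peers_emotional_cost

least_cost_avoider = (
    "peers" if cost_if_peers_yield < cost_if_truth_bearer_yields else "truth-bearer"
)
print(least_cost_avoider)  # -> "peers"
```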