I feel like it’s more relevant what a person actually believes than whether they think of themselves as uncertain. Moral certainty seems directly problematic (in terms of risks of recklessness and unilateral action) only when it comes together with moral realism: If you think you know the single correct moral theory, you’ll consider yourself justified in overriding other people’s moral beliefs and thwarting the goals they’ve been working towards.
By contrast, there seems to me to be no clear link from “anti-realist moral certainty in some subjectivist axiology” to “considers themselves justified in overriding other people’s life goals.” On the contrary, unless someone has an anti-social personality to begin with, it seems only intuitive/natural to me to go from “anti-realism about morality is true” to “we should probably treat moral disagreements between morally certain individuals more like we’d ideally treat political disagreements.” How would we ideally want to treat political disagreements? I’d say we’d want to keep political polarization low, accept that there’ll be differences in views, and agree to play fair and find positive-sum compromises. If some political faction goes around thinking it’s okay to sabotage others or use their power unfairly (e.g., restricting the free expression of everyone who opposes their talking points), the problem is not that they’re “too politically certain in what they believe.” The problem is that they’re too politically certain that what they believe is what everyone ought to believe. This seems like an important difference!
There’s also something else that I find weird about highlighting uncertainty as a solution to recklessness/fanaticism. Uncertainty can transition to increased certainty later on, as people do more thinking. So, it doesn’t feel like a stable solution. (Not to mention that, as EAs tell themselves it’s virtuous to remain uncertain, this impedes philosophical progress at the level of individuals.)
So, while I’m on board with cautioning against overconfidence and would probably concede that there’s often a link between overconfidence and unjustified moral or metaethical confidence, I feel like it’s misguided in more than one way to highlight “moral certainty” as the thing that’s directly bad here. (You’re of course free to disagree.)
In general (whether realist or anti-realist), there is “no clear link” between axiological certainty and oppressive behavior, precisely because there are further practical norms (e.g. respect for rights, whether instrumentally or non-instrumentally grounded) that mediate between evaluation and action.
You suggest that it “seems only intuitive/natural” that an anti-realist should avoid being “too politically certain that what they believe is what everyone ought to believe.” I’m glad to hear that you’re naturally drawn to liberal tolerance. But many human beings evidently aren’t! It’s a notorious problem for anti-realism to explain how it doesn’t just end up rubber-stamping any values whatsoever, even authoritarian ones.
Moral realists can hold that liberal tolerance is objectively required as a practical norm, which seems more robustly constraining than just holding it as a personal preference. So the suggestion that “moral realism” is “problematic” here strikes me as completely confused. You’re implicitly comparing a realist authoritarian with an anti-realist liberal, but all the work is being done by the authoritarian/liberal contrast, not the realist/antirealist one. If you hold fixed people’s first-order views, not just about axiology but also about practical norms, then their metaethics makes no further difference.
That said, I very much agree about the “weirdness” of turning to philosophical uncertainty as a solution. Surely philosophical progress (done right) is a good thing, not a moral threat. But I think that just reinforces my alternative response that empirical uncertainty vs overconfidence is the real issue here. (Either that, or—in some conceivable cases, like an authoritarian AI—a lack of sufficient respect for the value of others’ autonomy. But the problem with someone who wrongly disregards others’ autonomy is not that they ought to be “morally uncertain”, but that they ought to positively recognize autonomy as a value. That is, they problematically lack sufficient confidence in the correct values. It’s of course unsurprising that having bad moral views would be problematic!)
I agree with what you say in the last paragraph, including the highlighting of autonomy/placing value on it (whether in a realist or anti-realist way).
I’m not convinced by what you said about the effects of belief in realism vs anti-realism.
If you hold fixed people’s first-order views, not just about axiology but also about practical norms, then their metaethics makes no further difference.
Sure, but that feels like it’s begging the question.
Let’s grant that the people we’re comparing already have liberal intuitions. After all, this discussion started in a context that I’d summarize as “What are ideological risks in EA-related settings, like the FTX/SBF setting?” – so, not a setting where authoritarian intuitions are common. Also, the context wasn’t “How would we reform people who start out with illiberal intuitions?” – that would be a different topic.
With that out of the way, then, the relevant question strikes me as something like this:
Under which metaethical view (if any) – axiological realism vs axiological anti-realism – is there more of a temptation for axiologically certain individuals with liberal intuitions to re-think/discount these liberal intuitions so as to make the world better according to their axiology?
Here’s how I picture the axiological anti-realist’s internal monologue:
“The point of liberal intuitions is to prevent one person from imposing their beliefs on others. I care about my axiological views, but, since I have these liberal intuitions, I do not feel compelled to impose my views on others. There’s no tension here.”
By contrast, here’s how I picture the axiological realist:
“I have these liberal intuitions that make me uncomfortable with the thought of imposing my views on others. At the same time, I know what the objectively correct axiology is, so, if I, consequentialist-style, do things that benefit others according to the objectively correct axiology, then there’s a sense in which that will be better for them than if I didn’t do it. Perhaps this justifies going against the common-sense principles of liberalism, if I’m truly certain enough and am not self-deceiving here? So, I’m kind of torn...”
I’m not just speaking about hypotheticals. I think this is a dynamic that totally happens with some moral realists in the EA context. For instance, back when I was a moral realist negative utilitarian, I didn’t like that my moral beliefs put my goals in tension with most of the rest of the world, but I noticed that there was this tension. It feels like the tension disappeared when I realized that I have to agree to disagree with others about matters of axiology (as opposed to thinking, “I have to figure out whether I’m indeed correct about my high confidence, or whether I’m the one who’s wrong”).
Sure, maybe the axiological realist will come up with an argument they find compelling for why they shouldn’t impose the correct axiology on others. Or maybe their notion of “correct axiology” was always inherently about preference fulfillment, which you could say entails respecting autonomy by definition. (That said, if someone were also counting “making future flourishing people” as “creating more preference fulfillment,” then this sort of axiology is at least in some possible tension with respecting the autonomy of present/existing people.) ((Also, this is just a terminological note, but I usually think of preference utilitarianism as a stance that isn’t typically “axiologically realist,” so I’d say any “axiological realism” faces the same issue of there being at least a bit of tension with believing in and valuing autonomy in practice.))
When I talked about whether there’s a “clear link” between two beliefs, I didn’t mean that the link would be binding or inevitable. All I meant is that there’s some tension that one has to address somehow.
That was the gist of my point, and I feel like the things you said in reply were perhaps often correct, but they went past the point I was trying to convey. (Maybe part of what goes into this disagreement is that you might be strawmanning what I think of as “anti-realism” by equating it with “relativism.”)
Here’s how I picture the axiological anti-realist’s internal monologue:
“The point of liberal intuitions is to prevent one person from imposing their beliefs on others. I care about my axiological views, but, since I have these liberal intuitions, I do not feel compelled to impose my views on others. There’s no tension here.”
By contrast, here’s how I picture the axiological realist:
“I have these liberal intuitions that make me uncomfortable with the thought of imposing my views on others. At the same time, I know what the objectively correct axiology is, so, if I, consequentialist-style, do things that benefit others according to the objectively correct axiology, then there’s a sense in which that will be better for them than if I didn’t do it. Perhaps this justifies going against the common-sense principles of liberalism, if I’m truly certain enough and am not self-deceiving here? So, I’m kind of torn...”
Right, this tendentious contrast is just what I was objecting to. I could just as easily spin the opposite picture:
(1) A possible anti-realist monologue: “I find myself with some liberal intuitions; I also have various axiological views. Upon reflection, I find that I care more about preventing suffering (etc.) than I do about abstract tolerance or respect for autonomy, and since I’m an anti-realist I don’t feel compelled to abide by norms constraining my pursuit of what I most care about.”
(2) A possible realist monologue: “The point of liberal norms is to prevent one person from imposing their beliefs on others. I’m confident about what the best outcomes would be, considered in abstraction from human choice and agency, but since it would be objectively wrong and objectionable to pursue these ends via oppressive or otherwise illicit means, I’ll restrict myself to permissible means of promoting the good. There’s no tension here.”
The crucial question is just what practical norms one accepts (liberal or otherwise). Proposing correlations between other views and bad practical norms strikes me as an unhelpful—and rather bias-prone—distraction.
That said, I very much agree about the “weirdness” of turning to philosophical uncertainty as a solution. Surely philosophical progress (done right) is a good thing, not a moral threat.
I of course also think that philosophical progress, done right, is a good thing. However I also think genuine philosophical progress is much harder than it looks (see Some Thoughts on Metaphilosophy for some relevant background views), and therefore am perhaps more worried than most about philosophical “progress”, done wrong, being a bad thing.
This kind of reminds me of a psychological construct called the Militant Extremist Mindset. Roughly, the mindset is composed of three loosely related factors: proviolence, vile world, and utopianism. The idea is that elevated levels on each of the three factors are most predictive of fanaticism. I think (total) utilitarianism/strong moral realism/lack of uncertainty/visions of hedonium-filled futures fall into the utopianism category. I think EA is pretty pervaded by vile-world thinking, including reminders about how bad the world is/could be and cynicism about human nature. Perhaps what holds most EAs back at this point is a lack of proviolence—a lack of willingness to use violent means/cause great harm to others; I think this can be roughly summed up as “not being highly callous/malevolent”.
I think it’s important to reduce extremes of utopianism and vile-world thinking in EA, which I feel are concerningly abundant here. Perhaps it is impossible/undesirable to completely eliminate them. But what might be most important is something that seems fairly obvious: try to screen out people who are capable of willfully causing massive harm (i.e., callous/malevolent individuals).
Based on some research I’ve done, the distribution of malevolence is highly right-skewed, so screening for malevolence would probably affect the fewest individuals while still being highly effective. It also seems that callousness and a willingness to harm others for instrumental gain are associated with abnormalities in more primal regions of the brain (like the amygdala) and are highly resistant to interventions. Therefore, changing the culture is very unlikely to robustly “align” such individuals. And intuitively, a willingness to cause harm seems to be the most crucial component, while the other components seem more to channel malevolence in a fanatical direction.
Sorry I’m kind of just rambling and hoping something useful comes out of this.
I think too much moral certainty doesn’t necessarily cause someone to be dangerous by itself; there have to be other elements to their personality or beliefs. For example, lots of people are or were unreasonably certain about divine command theory[1], but only a minority of them caused much harm (e.g., by being involved in crusades and inquisitions). I’m not sure it has much to do with realism vs. non-realism, though. I can definitely imagine some anti-realist (e.g., one with strong negative utilitarian beliefs) causing a lot of damage if they were put in certain positions.
Uncertainty can transition to increased certainty later on, as people do more thinking. So, it doesn’t feel like a stable solution.
This seems like a fair point. I can think of some responses. Under realism (or if humans specifically tend to converge under reflection) people would tend to converge to similar values as they think more, so increased certainty should be less problematic. Under other metaethical alternatives, one might hope that as we mature overall in our philosophies and social systems, we’d be able to better handle divergent values through compromise/cooperation.
(Not to mention that, as EAs tell themselves it’s virtuous to remain uncertain, this impedes philosophical progress at the level of individuals.)
Yeah, there is perhaps a background disagreement between us, where I tend to think there’s little opportunity to make large amounts of genuine philosophical progress without doing much more cognitive work (i.e., thoroughly exploring the huge space of possible ideas/arguments/counterarguments), so your concern doesn’t seem very significant to me in the near term.
[1] Self-nitpick: divine command theory is actually a meta-ethical theory. I should have said “various religious moralities”.