In general (whether realist or anti-realist), there is "no clear link" between axiological certainty and oppressive behavior, precisely because there are further practical norms (e.g. respect for rights, whether instrumentally or non-instrumentally grounded) that mediate between evaluation and action.
You suggest that it "seems only intuitive/natural" that an anti-realist should avoid being "too politically certain that what they believe is what everyone ought to believe." I'm glad to hear that you're naturally drawn to liberal tolerance. But many human beings evidently aren't! It's a notorious problem for anti-realism to explain how it doesn't just end up rubber-stamping any values whatsoever, even authoritarian ones.
Moral realists can hold that liberal tolerance is objectively required as a practical norm, which seems more robustly constraining than just holding it as a personal preference. So the suggestion that "moral realism" is "problematic" here strikes me as completely confused. You're implicitly comparing a realist authoritarian with an anti-realist liberal, but all the work is being done by the authoritarian/liberal contrast, not the realist/anti-realist one. If you hold fixed people's first-order views, not just about axiology but also about practical norms, then their metaethics makes no further difference.
That said, I very much agree about the "weirdness" of turning to philosophical uncertainty as a solution. Surely philosophical progress (done right) is a good thing, not a moral threat. But I think that just reinforces my alternative response that empirical uncertainty vs. overconfidence is the real issue here. (Either that, or, in some conceivable cases like an authoritarian AI, a lack of sufficient respect for the value of others' autonomy. But the problem with someone who wrongly disregards others' autonomy is not that they ought to be "morally uncertain", but that they ought to positively recognize autonomy as a value. That is, they problematically lack sufficient confidence in the correct values. It's of course unsurprising that having bad moral views would be problematic!)
I agree with what you say in the last paragraph, including the highlighting of autonomy/placing value on it (whether in a realist or anti-realist way).
I'm not convinced by what you said about the effects of belief in realism vs. anti-realism.
If you hold fixed people's first-order views, not just about axiology but also about practical norms, then their metaethics makes no further difference.
Sure, but that feels like it's begging the question.
Let's grant that the people we're comparing already have liberal intuitions. After all, this discussion started in a context that I'd summarize as "What are ideological risks in EA-related settings, like the FTX/SBF setting?", so not a setting where authoritarian intuitions are common. Also, the context wasn't "How would we reform people who start out with illiberal intuitions?"; that would be a different topic.
With that out of the way, then, the relevant question strikes me as something like this:
Under which metaethical view, if any (axiological realism vs. axiological anti-realism), is there more of a temptation for axiologically certain individuals with liberal intuitions to re-think/discount these liberal intuitions so as to make the world better according to their axiology?
Here's how I picture the axiological anti-realist's internal monologue:
"The point of liberal intuitions is to prevent one person from imposing their beliefs on others. I care about my axiological views, but, since I have these liberal intuitions, I do not feel compelled to impose my views on others. There's no tension here."
By contrast, here's how I picture the axiological realist:
"I have these liberal intuitions that make me uncomfortable with the thought of imposing my views on others. At the same time, I know what the objectively correct axiology is, so, if I, consequentialist-style, do things that benefit others according to the objectively correct axiology, then there's a sense in which that will be better for them than if I didn't do it. Perhaps this justifies going against the common-sense principles of liberalism, if I'm truly certain enough and am not self-deceiving here? So, I'm kind of torn..."
I'm not just speaking about hypotheticals. I think this is a dynamic that totally happens with some moral realists in the EA context. For instance, back when I was a moral realist negative utilitarian, I didn't like that my moral beliefs put my goals in tension with most of the rest of the world, but I noticed that there was this tension. It feels like the tension disappeared when I realized that I have to agree to disagree with others about matters of axiology (as opposed to thinking, "I have to figure out whether I'm indeed correct about my high confidence, or whether I'm the one who's wrong").
Sure, maybe the axiological realist will come up with an argument they find compelling for why they shouldn't impose the correct axiology on others. Or maybe their notion of "correct axiology" was always inherently about preference fulfillment, which you could say entails respecting autonomy by definition. (That said, if someone were also counting "making future flourishing people" as "creating more preference fulfillment," then this sort of axiology is at least in some possible tension with respecting the autonomy of present/existing people.) ((Also, this is just a terminological note, but I usually think of preference utilitarianism as a stance that isn't typically "axiologically realist," so I'd say any "axiological realism" faces the same issue of there being at least a bit of tension with believing in and valuing autonomy in practice.))
When I talked about whether there's a "clear link" between two beliefs, I didn't mean that the link would be binding or inevitable. All I meant is that there's some tension that one has to address somehow.
That was the gist of my point, and I feel like the things you said in reply were perhaps often correct, but they went past the point I tried to convey. (Maybe part of what goes into this disagreement is that you might be strawmanning what I think of as "anti-realism" by treating it as "relativism".)
Here's how I picture the axiological anti-realist's internal monologue:
"The point of liberal intuitions is to prevent one person from imposing their beliefs on others. I care about my axiological views, but, since I have these liberal intuitions, I do not feel compelled to impose my views on others. There's no tension here."
By contrast, here's how I picture the axiological realist:
"I have these liberal intuitions that make me uncomfortable with the thought of imposing my views on others. At the same time, I know what the objectively correct axiology is, so, if I, consequentialist-style, do things that benefit others according to the objectively correct axiology, then there's a sense in which that will be better for them than if I didn't do it. Perhaps this justifies going against the common-sense principles of liberalism, if I'm truly certain enough and am not self-deceiving here? So, I'm kind of torn..."
Right, this tendentious contrast is just what I was objecting to. I could just as easily spin the opposite picture:
(1) A possible anti-realist monologue: "I find myself with some liberal intuitions; I also have various axiological views. Upon reflection, I find that I care more about preventing suffering (etc.) than I do about abstract tolerance or respect for autonomy, and since I'm an anti-realist I don't feel compelled to abide by norms constraining my pursuit of what I most care about."
(2) A possible realist monologue: "The point of liberal norms is to prevent one person from imposing their beliefs on others. I'm confident about what the best outcomes would be, considered in abstraction from human choice and agency, but since it would be objectively wrong and objectionable to pursue these ends via oppressive or otherwise illicit means, I'll restrict myself to permissible means of promoting the good. There's no tension here."
The crucial question is just what practical norms one accepts (liberal or otherwise). Proposing correlations between other views and bad practical norms strikes me as an unhelpful (and rather bias-prone) distraction.
That said, I very much agree about the "weirdness" of turning to philosophical uncertainty as a solution. Surely philosophical progress (done right) is a good thing, not a moral threat.
I of course also think that philosophical progress, done right, is a good thing. However, I also think genuine philosophical progress is much harder than it looks (see Some Thoughts on Metaphilosophy for some relevant background views), and therefore am perhaps more worried than most about philosophical "progress", done wrong, being a bad thing.