I found this post quite interesting, and very readable despite covering a complex and murky topic.
I’m also probably precisely the sort of person it’s aimed at: I currently have a very high credence in (non-naturalistic) moral realism being false, and don’t really know what it’d mean for it to be true. Yet I largely act as though it’s true, out of a sense that the “stakes are massively higher” if it’s true than if it’s false. (This is a description of my current mindset/behaviour, rather than something I claim is justified.)
I think this post updated me slightly towards less confidence in that “wager”, and slightly more openness to acting as though moral realism is false. But the update was perhaps surprisingly small. I’ll try to explain in this and other comments why I think that was the case. (Caveat that these comments might be driven by motivated reasoning and might tend towards nit-picking, as this is a post I’m predisposed to disagree with.)
Perhaps the key thing is that the post outlines the implications of having such strong metaethical fanaticism* that one would continue to say what Bob’s saying even as a superintelligent AI says what this AI is saying. But I haven’t had a superintelligent AI say such things to me. And I don’t think my current level of metaethical fanaticism (or something similar) commits me to behave as Bob does even if I got the substantial new evidence Bob gets in this scenario.
For example, I could perhaps think things “matter a million times more” if moral realism is true than if not, rather than infinitely more. Or I could perhaps think things matter infinitely more if realism is true, but also think I should reject Pascalian wagers when probabilities fall below 0.01%. If that’s the nature of my fanaticism, my wager might make sense now, but not as I receive arbitrarily large amounts of evidence favouring anti-realism.
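(To make the structure of those two moderated wagers concrete, here’s a minimal numerical sketch. The multiplier, threshold, and credence values are purely hypothetical illustrations of the shape of the argument, not claims about anyone’s actual credences.)

```python
# Hedged sketch of two "moderate" wager structures (all numbers hypothetical).

def bounded_wager(p_realism: float, multiplier: float = 1_000_000) -> bool:
    """Act as though realism is true iff its stake-weighted expected value
    dominates, given a finite (not infinite) stakes multiplier."""
    return p_realism * multiplier > (1 - p_realism) * 1

def thresholded_wager(p_realism: float, floor: float = 0.0001) -> bool:
    """Treat stakes as unboundedly higher under realism, but reject the
    wager outright once credence falls below a Pascalian floor (0.01%)."""
    return p_realism >= floor

# With a modest credence in realism, both moderate wagers still favour
# acting as though realism is true:
assert bounded_wager(0.01)        # 0.01 * 1e6 = 10000 > 0.99
assert thresholded_wager(0.01)

# But sufficiently strong evidence for anti-realism eventually flips both,
# unlike the infinitely strong wager the dialogue's Bob holds:
assert not bounded_wager(1e-7)    # 1e-7 * 1e6 = 0.1 < ~1
assert not thresholded_wager(1e-7)
```

The point of the sketch is just that either structure behaves like Bob’s wager at current credence levels, while still being abandonable under the kind of overwhelming evidence the dialogue imagines.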
This is not a flaw with this post if you mean “metaethical fanaticism” to only refer to a particularly strong/extreme version of that sort of thing. And from the “Context” section, it seems that may be your intention. But I think this would mean that “metaethical fanaticism” wouldn’t cover all people for whom a Pascalian wager favouring moral realism may currently make sense, and thus that this dialogue doesn’t directly highlight flaws with all such wagers. And either this post or your last post gave me the impression that this post would be meant as a critique of this sort of wager more generally (but maybe that was just me).
And I think this matters, because I think (though I’m very open to push-back on this) that it means that someone like me could reasonably lean towards the following high-level policy:
Humanity should try to “keep our options open” for a while (by avoiding existential risks), while also improving our ability to understand, reflect, etc. so that we get into a better position to work out what options we should take.
And then maybe, after a few decades or centuries, we’ll come to realise moral realism is true, or we’ll at least get a good enough idea of what sort of thing we’re talking about or what we’re after that we can productively pursue it (in ways that don’t just boil down to doing the “self-evidently” good things that many anti-realists would’ve opted for anyway).
Or maybe we come to realise that this wager/fanaticism is misguided, or that moral realism is really, really close to certainly false, or that we’ll almost certainly never get any clue about what moral realism would say we should do (apart from things that are “self-evidently” good by anti-realist lights anyway). And if that happens, we then act as anti-realists, having used “only” a few decades or centuries suboptimally in the meantime.
I don’t fully trust my thinking on these matters, and I’d be quite interested to hear counterpoints. But I guess, at the least, this comment might serve as an insight into what someone operating with something like metaethical fanaticism might think in reaction to this post.
*The term “metaethical fanaticism” feels slightly pejorative, and I considered using scare quotes around it. But it also feels somewhat reasonable, and the term “fanaticism” is used for a similar purpose in Christian Tarsney’s thesis. So I ended up deciding I was ok with accepting that label for myself without quote marks.
Thanks for those thoughts, and for the engagement in general! I just want to flag that I agree that weaker versions of the wager aren’t covered by my objections (I also say this in endnote 5 of my previous post). Weaker wagers are also similar to the way valuing reflection works for anti-realists (esp. if they’re directed toward naturalist or naturalism-like versions of moral realism).
I think it’s important to note that anti-realism is totally compatible with this part you write here:
Humanity should try to “keep our options open” for a while (by avoiding existential risks), while also improving our ability to understand, reflect, etc. so that we get into a better position to work out what options we should take.
I know that you wrote this part because you’d primarily want to use the moral reflection to figure out if realism is true or not. But even if one were confident that moral realism is false, there remain some strong arguments to favor reflection. (It’s just that those arguments feel like less of a forced move, and there are interesting counter-considerations to also think about.)
(Also, whether one is a moral realist or not, it’s important to note that working toward a position of option value for philosophical reflection isn’t the only important thing to do according to all potentially plausible moral views. For some moral views, the most important time to create value arguably happens before long reflection.)
Weaker wagers are also similar to the way valuing reflection works for anti-realists (esp. if they’re directed toward naturalist or naturalism-like versions of moral realism).
[...] even if one were confident that moral realism is false, there remain some strong arguments to favor reflection.
I think these are quite important points. I would like more people to favour more reflection in general and a Long Reflection in particular, including anti-realists. And I think if I became convinced that I should act as though anti-realism is true, I would still favour more reflection and a Long Reflection.
But I think I see two differences on this front between (a) people who are only somewhat confident in anti-realism, or very confident but accept a wager favouring realism, vs (b) people who are very confident in anti-realism and reject a wager favouring realism. (I think I’m in the second part of category (a) and you’re in category (b).)
(Epistemic status: I expect there’s more work on these questions than I’ve read, so I’d be interested in counterpoints or links.)
First, it seems that people in category (a) almost definitely should value reflection and a Long Reflection, given only the conditions that they can’t be very certain of a fully fleshed out first-order moral theory and that they have a notable credence that things more than decades or centuries from now matter a notable amount. (Though I’m not sure precisely what level of credence or “mattering” is required, and it might depend on things like how to deal with Pascalian situations over first-order moral theories.)
Meanwhile, it seems that people in category (b) should value reflection and a Long Reflection if their values favour that, which maybe most but not all people’s values do. So perhaps there are “strong arguments” to favour reflection even under anti-realism, and those arguments are stronger and applicable to a wider set of values than many people realise, but the arguments won’t hold for everyone?
Second, it seems that people in category (b) would likely devote less of their reflection/Long Reflection to thinking about things relevant to moral realism vs anti-realism, or the implications moral realism might have, and more to the implications anti-realism might have. This is probably good if those people’s mindset is more reasonable than that of people in category (a), but less good if it isn’t. So it seems a meaningful difference worth being aware of.
(Also, whether one is a moral realist or not, it’s important to note that working toward a position of option value for philosophical reflection isn’t the only important thing to do according to all potentially plausible moral views. For some moral views, the most important time to create value arguably happens before long reflection.)
Yes, I think it makes sense to temper longtermism somewhat on these grounds, as well as on grounds of reducing astronomical waste. I still lean quite longtermist, but also value near-termist interventions on these grounds. And I might opt for things like terminating the Long Reflection after a few centuries even if a few additional millennia of reflecting would make us slightly more certain about what to do, and even if longtermism alone would say I should take that deal.
Ah, re-reading endnote 5 from your prior post, I see more clearly that you mean “metaethical fanaticism” as just a quite strong stance that favours moral realism absolutely, which also makes this post’s argument clearer. You also give a description that indicates the same thing here: “I coined the term metaethical fanaticism to refer to the stance of locking in the pursuit of irreducible normativity as a life goal.”
Maybe including a similar endnote here, or even in the main text, would’ve helped me. I’d read it in the last post, but then this post gave me the impression that it was arguing against even “weaker wagers”, which favour moral realism by some large rather than infinite amount. For example, the sentences preceding “I coined the term...” are:
Instead, I wrote this dialogue to call into question that even if things increasingly started to look as though irreducible normativity were false, we should still act as though it applies. In my previous post “#4: Why the Moral Realism Wager Fails,” I voiced skepticism about a general wager in favor of pursuing irreducible normativity. Still, I conceded that such a wager could apply in the case of certain individuals.
That last sentence being just before the description of “metaethical fanaticism” seems to suggest that all individuals for whom such a wager applies are metaethical fanatics. I think I’m one such individual, and that my version of “fanaticism” is more moderate.
Also, the first sentence there at least sounds to me like it could mean “even if things came to look more like irreducible normativity were false than they currently do”, rather than “however much things started to look as though irreducible normativity were false” (i.e., even if we became arbitrarily certain of that).
(Again, this may be nit-picking driven by motivated reasoning or defensiveness or something.)