Weaker wagers are also similar to the way valuing reflection works for anti-realists (esp. if they're directed toward naturalist or naturalism-like versions of moral realism).
[...] even if one were confident that moral realism is false, there remain some strong arguments to favor reflection.
I think these are quite important points. I would like more people to favour more reflection in general and a Long Reflection in particular, including anti-realists. And I think if I became convinced that I should act as though anti-realism is true, I would still favour more reflection and a Long Reflection.
But I think I see two differences on this front between (a) people who are only somewhat confident in anti-realism, or who are very confident but accept a wager favouring realism, and (b) people who are very confident in anti-realism and reject a wager favouring realism. (I think I'm in the second part of category (a) and you're in category (b).)
(Epistemic status: I expect there's more work on these questions than I've read, so I'd be interested in counterpoints or links.)
First, it seems that people in category (a) almost definitely should value reflection and a Long Reflection, given only the conditions that they can't be very certain of a fully fleshed-out first-order moral theory and that they have a notable credence that things more than decades or centuries from now matter a notable amount. (Though I'm not sure precisely what level of credence or "mattering" is required, and it might depend on things like how to deal with Pascalian situations over first-order moral theories.)
Meanwhile, it seems that people in category (b) should value reflection and a Long Reflection if their values favour that, which maybe most, but not all, people's values do. So perhaps there are "strong arguments" to favour reflection even under anti-realism, and those arguments are stronger and applicable to a wider set of values than many people realise, but the arguments won't hold for everyone?
Second, it seems that people in category (b) would likely devote less of their reflection/Long Reflection to thinking about things relevant to moral realism vs anti-realism, or about the implications moral realism might have, and more to the implications anti-realism might have. This is probably good if those people's mindset is more reasonable than that of people in category (a), but less good if it isn't. So it seems a meaningful difference worth being aware of.
(Also, whether one is a moral realist or not, it's important to note that working toward a position of option value for philosophical reflection isn't the only important thing to do according to all potentially plausible moral views. For some moral views, the most important time to create value arguably happens before long reflection.)
Yes, I think it makes sense to temper longtermism somewhat on these grounds, as well as on grounds of reducing astronomical waste. I still lean quite longtermist, but I also value near-termist interventions on these grounds. And I might opt for things like terminating the Long Reflection after a few centuries even if a few additional millennia of reflection would make us slightly more certain about what to do, and even if longtermism alone would say I should take that deal.