My post The Moral Uncertainty Rabbit Hole, Fully Excavated seems relevant to the discussion here.
In that post, I describe examples of “reflection environments” that define ideal reasoning conditions (to specify one’s “idealized values”). I talk about pitfalls of reflection environments and judgment calls we’d have to make within such an environment. (Pitfalls are things that are bad if they happen but could, at least in theory, be avoided. Judgment calls are things that aren’t bad per se but seem to introduce path dependencies we can’t avoid, which may reduce the chance of convergent outcomes.)
I also talk about “reflection strategies,” which describe how someone goes about their moral reflection inside a reflection environment. I distinguish between conservative and open-minded reflection strategies; they differ primarily in whether someone has already formed convictions (the distinction is one of degree). I describe how open-minded reflection strategies come with some risk of leading to under-defined outcomes. (I argue that this isn’t necessarily a problem, but it’s something people will want to be aware of.)
Here’s a section from somewhere in the middle of the post that summarizes some conclusions:
Conclusion: “One has to actively create oneself”
“Moral reflection” sounds straightforward – naively, one might think that the right path of reflection will somehow reveal itself. However, once we consider the complexities of setting up a suitable reflection environment, what proceeding inside it would be like, and how many judgment calls we’d have to make, we see that things can get tricky.
Joe Carlsmith summarized it as follows in an excellent post (what Carlsmith calls “idealizing subjectivism” corresponds to what I call “deferring to moral reflection”):
> My current overall take is that especially absent certain strong empirical assumptions, idealizing subjectivism is ill-suited to the role some hope it can play: namely, providing a privileged and authoritative (even if subjective) standard of value. Rather, the version of the view I favor mostly reduces to the following (mundane) observations:
>
> 1. If you already value X, it’s possible to make instrumental mistakes relative to X.
> 2. You can choose to treat the outputs of various processes, and the attitudes of various hypothetical beings, as authoritative to different degrees.
>
> This isn’t necessarily a problem. To me, though, it speaks against treating your “idealized values” the way a robust meta-ethical realist treats the “true values.” That is, you cannot forever aim to approximate the self you “would become”; you must actively create yourself, often in the here and now. Just as the world can’t tell you what to value, neither can your various hypothetical selves — unless you choose to let them. Ultimately, it’s on you.
In my words, the difficulty with deferring too much to moral reflection is that the benefits of reflection procedures (having more information and more time to think, having access to augmented selves, etc.) don’t change what it feels like, fundamentally, to contemplate what to value. For all we know, many people would continue to feel apprehensive about doing their moral reasoning “the wrong way,” since they’d have to make judgment calls left and right. Plausibly, no “correct answers” would suddenly appear to us. To avoid leaving our views under-defined, we have to – at some point – form convictions by committing to certain principles or ways of reasoning. As Carlsmith describes it, one has to – at some point – “actively create oneself.” (The alternative is to accept that one’s reflection outcome may be under-defined.)
It is possible to delay the moment of “actively creating oneself” until some point within the reflection procedure. (This corresponds to an open-minded reflection strategy; there are strong arguments for keeping one’s reflection strategy at least moderately open-minded.) Note, however, that in doing so, one “actively creates oneself” as someone who trusts the reflection procedure more than one’s object-level moral intuitions or reasoning principles. This description may fit some people, but it doesn’t fit everyone. Alternatively, it could fit someone in some domains but not others.
Overall, I think Holden’s notion of future-proof values is intelligible and holds up to deeper analysis, but I’d imagine that a lot of people underestimate how useful it is to form convictions early on about certain ways of reasoning or certain components of one’s values, to prevent the reflection outcome from becoming under-defined to a degree we might find unsatisfying.