I think the discussion under “An outside view on having strong views” would benefit from discussing how much normative ethics is analogous to science and how much it is analogous to something more like personal career choice (which weaves together personal interests but still has objective components where research can be done—see also my post on life goals).
FWIW, I broadly agree with your response to the objection/question, “I’m an anti-realist about philosophical questions so I think that whatever I value is right, by my lights, so why should I care about any uncertainty across theories? Can’t I just endorse whatever views seem best to me?”
As forum readers probably know by now, I think anti-realism is obviously true, but I don’t mean the “anything goes” type of anti-realism, so I’m not unsympathetic to your overall takeaway.
Still, even though I agree with your response to the “anything goes” type of anti-realism, I think you’d ideally want to engage more with metaethical uncertainty and how moral reflection works if (the more structure-containing) moral anti-realism is true.
I’ve argued previously that moral uncertainty and moral realism are in tension.
The main argument in that linked post goes as follows: Moral realism implies the existence of a speaker-independent moral reality. Being morally uncertain means having a vague or unclear understanding of that reality. So there’s a hidden tension: Without clearly apprehending the alleged moral reality, how can we be confident it exists?
In the post, I then discuss three possible responses for resolving that challenge and explain why I think those responses all fail.
What this means is that moral uncertainty almost by necessity (there’s a trivial exception where your confidence in moral realism is based on updating on someone else’s expertise but they have not yet told you the true object-level morality that they believe in) implies either metaethical uncertainty (uncertainty between moral realism and moral anti-realism) or confident moral anti-realism.
That post has been on the EA Forum for 3 years and I’ve not gotten any pushback on it yet, but I’ve also not seen people start discussing moral uncertainty in a way that doesn’t sound subtly off or question-begging to me in light of what I pointed out. Instead, I think one should ideally discuss how to reason under metaethical uncertainty or how to do moral reflection within confident moral anti-realism.
If anyone is interested, I spelled out how I think we would do that here:
The “Moral Uncertainty” Rabbit Hole, Fully Excavated
It’s probably one of the two pieces of output I’m most proud of. My earlier posts in the anti-realism sequence covered ideas that I thought many people already understood, but this one let me contribute some new insights. (Joe Carlsmith has written similar stuff and writes and explains things better than I do; I mention some of his work in the post.)
If someone just wants to read the takeaways and not the underlying arguments for why I think those takeaways apply, here they are:
Selected takeaways: good vs. bad reasons for deferring to (more) moral reflection
As a few takeaways from this post, I made a list of good and bad reasons for deferring (more) to moral reflection. (Note, again, that deferring to moral reflection comes on a spectrum.)
In this context, it’s important to note that deferring to moral reflection would be wise if moral realism is true or if idealized values are “here for us to discover.” In this sequence, I argued that neither of those is true – but some (many?) readers may disagree.
Assuming that I’m right about the flavor of moral anti-realism I’ve advocated for in this sequence, below are my “good and bad reasons for deferring to moral reflection.”
(Note that this is not an exhaustive list, and it’s pretty subjective. Moral reflection feels more like an art than a science.)
Bad reasons for deferring strongly to moral reflection:
You haven’t contemplated the possibility that the feeling of “everything feels a bit arbitrary; I hope I’m not somehow doing moral reasoning the wrong way” may never go away unless you get into a habit of forming your own views. Therefore, you never practiced the steps that could lead to you forming convictions. Because you haven’t practiced those steps, you assume you’re far from understanding the option space well enough, which only reinforces your belief that it’s too early for you to form convictions.
You observe that other people’s fundamental intuitions about morality differ from yours. You consider that an argument for trusting your reasoning and your intuitions less than you otherwise would. As a result, you lack enough trust in your reasoning to form convictions early.
You have an unexamined belief that things don’t matter if moral anti-realism is true. You want to defer strongly to moral reflection because there’s a possibility that moral realism is true. However, you haven’t thought about the argument that naturalist moral realism and moral anti-realism use the same currency, i.e., that the moral views you’d adopt if moral anti-realism were true might matter just as much to you.
Good reasons for deferring strongly to moral reflection:
You don’t endorse any of the bad reasons, and you still feel drawn to deferring to moral reflection. For instance, you feel genuinely unsure how to reason about moral views or what to think about a specific debate (despite having tried to form opinions).
You think your present way of visualizing the moral option space is unlikely to be a sound basis for forming convictions. You suspect that it is likely to be highly incomplete or even misguided compared to how you’d frame your options after learning more science and philosophy inside an ideal reflection environment.
Bad reasons for forming some convictions early:
You think moral anti-realism means there’s no for-you-relevant sense in which you can be wrong about your values.
You think of yourself as a rational agent, and you believe rational agents must have well-specified “utility functions.” Hence, ending up with under-defined values (which is a possible side-effect of deferring strongly to moral reflection) seems irrational/unacceptable to you.
Good reasons for forming some convictions early:
You can’t help it, and you think you have a solid grasp of the moral option space (e.g., you’re likely to pass Ideological Turing tests of some prominent reasoners who conceptualize it differently).
You distrust your ability to guard yourself against unwanted opinion drift inside moral reflection procedures, and the views you already hold feel too important to expose to that risk.
Hey Lukas,
Thanks for the detailed reply. You raise a number of interesting points, and I’m not going to touch on all of them given a lack of time, but there are a few I want to highlight.
the discussion under “An outside view on having strong views” would benefit from discussing how much normative ethics is analogous to science and how much it is analogous to something more like personal career choice (which weaves together personal interests but still has objective components where research can be done—see also my post on life goals).
While I can see how you might make this claim, I don’t really think ethics is very analogous to personal career choice. Analogies are always limited (more on this later), but I think this analogy implies too much of the “personal fit” aspect of career choice, where decisions are often as much about “well, what do you like to do?” as they are about “what will happen if you do that?”. I think you’re largely making the case for the former, with some part of the latter, and for morality I might push for a different combination, even assuming a version of anti-realism. But perhaps all this comes down to what you think of career choice, where I don’t have particularly strong takes.
I think anti-realism is obviously true, but I don’t mean the “anything goes” type of anti-realism, so I’m not unsympathetic to your overall takeaway.
Still, even though I agree with your response to the “anything goes” type of anti-realism, I think you’d ideally want to engage more with metaethical uncertainty and how moral reflection works if (the more structure-containing) moral anti-realism is true.
You’re right that I haven’t engaged here with what normative uncertainty means in that circumstance, but I think, practically, it may look a lot like the type of bargaining and aggregation referenced in this post (and outlined elsewhere), just with a different reason for why people are engaged in that behavior. In one case, it’s largely because that’s how we’d come to the right answer, but in other cases it would be because there’s no right answer to the matter and the only way to resolve disputes is through aggregating opinions across different people and belief systems.
That said, I believe (correct me if I’m wrong) your posts are arguing for a particularly narrow version of realism, one more constrained than is typical, and that there’s a tension between moral realism and moral uncertainty.
Stepping back a bit, I think a big thrust of my post is that you generally shouldn’t make statements like “anti-realism is obviously true” because the nature of evidence for that claim is pretty weak, even if the arguments by which you reached that conclusion are clear and internally compelling to you. You’ve defined moral realism narrowly so perhaps this is neither here nor there but, as you may be aware, most English-speaking philosophers accept/lean towards moral realism despite you noting in this comment that many EAs who have been influential have been anti-realists (broadly defined). This isn’t compelling evidence, but it is evidence against the claim that anti-realism is “obviously correct” since you are at least implicitly claiming most philosophers are wrong about this issue.
What this means is that moral uncertainty almost by necessity (there’s a trivial exception where your confidence in moral realism is based on updating on someone else’s expertise but they have not yet told you the true object-level morality that they believe in) implies either metaethical uncertainty (uncertainty between moral realism and moral anti-realism) or confident moral anti-realism.
I’ve read your post on moral uncertainty and moral realism being in tension (and the first post where you defined moral realism) and I’m not sold on the responses you provide to your challenge. Take this section:
Still, I think the notion of “forever inaccessible moral facts” is incomprehensible, not just pointless. Perhaps(?) we can meaningfully talk about “unreachable facts of unknown nature,” but it seems strange to speak of unreachable facts of some known nature (such as “moral” nature). By claiming that a fact is of some known nature, aren’t we (implicitly) saying that we know of a way to tell why that fact belongs to the category? If so, this means that the fact is knowable, at least in theory, since it belongs to a category of facts whose truth-making properties we understand. If some fact were truly “forever unknowable,” it seems like it would have to be a fact of a nature we don’t understand. Whatever those forever unknowable facts may be, they couldn’t have anything to do with concepts we already understand, such as our “moral concepts” of the form (e.g.,) “Torturing innocent children is wrong.”
I could retort here that it seems totally reasonable to argue that there’s a fact of the matter about what caused the Big Bang or how life on Earth began. What caused these could conceivably be totally inaccessible to us now but still related to known facts. Nothing about not knowing how these things started commits us to saying (what I take to be the equivalent in this context) that the true nature of those situations has nothing to do with concepts we understand, like biology or physics. Further, given what we know now in these domains, I think it’s fair to rule out a wide range of potential causes and constrain things to a reasonable set of candidates for what may have caused them.
The analogy with morality here seems reasonable enough to me that you shouldn’t rule this type of response out.
Similarly, you say the following regarding branch two of the possible responses to your claim:
To summarize, the issue with self-evident moral statements like “Torturing innocent children is wrong” is that they don’t provide any evidence for a moral reality that covers disagreements in population ethics or accounts of well-being. To be confident moral realists, we’d need other ways of attaining moral knowledge and ascertaining the parts of the moral reality beyond self-evident statements. In other words, we can’t be confident moral realists about a far-reaching, non-trivial, not-immediately-self-evident moral reality unless we already have a clear sense of what it looks like.
I don’t fully buy this argument for similar reasons to the above. This seems more like an argument that, to be confident moral realists who assert correct answers to most/all the important questions, we need strong evidence of moral realism in most/all domains than it is an argument that we can’t be moral realists at all. One way I might take this (not saying you’d agree) would be to say you think moral realism that isn’t action guiding on the contentious points isn’t moral realism worth the name because all the value of the name is in the contentious points (and this may be particularly true in EA). But if that phrasing of the problem is acceptable, then we may be basically only arguing about the definition of “moral realism” and not anything practically relevant. Or, one could say we can’t be confident moral realists given the uncertainty about what morality entails in a great many cases, and I might retort “we don’t need to be confident in order to choose among the plausible options so long as we can whittle things down to a restricted set of choices and everything isn’t up for grabs.” This would be for basically the same reasons a huge number of potential options aren’t relevant for settling on the correct theory of abiogenesis or taking the right scientific actions given the set of plausible theories.
But perhaps a broader issue is that I, unlike many other effective altruists, am actually cool with (in your words) “minimalist moral realism” and with using aggregation methods like those mentioned above to come to final takes about what to do given the uncertainty. This is quite different from confidently stating “the correct answer is this precise version of utilitarianism, and here’s what it says we need to do…”. I don’t think what I’m comfortable saying obviously qualifies as an insignificant moral realism relative to such a utilitarian, even if the reasons for reaching the suggested actions differ.
But stepping back, this back and forth looks like another example of the move I criticized above: you are making some analogies and arguing that some conclusion follows from those analogies, while I’m denying those analogies (and therefore the conclusion) and making different analogies. Neither of us has on their side the kind of definitive evidence that prevails in scientific domains.
So, how confident am I that you’re wrong? Not super confident. If the version of moral anti-realism you describe is true and it results in something like your life-goals framework as the best way to decide ethical matters, then so be it. But the question is what to do given uncertainty about whether this is the correct approach, and about whether, assuming it is the correct approach, what it recommends differs from how we’d otherwise behave. It’s not clear to me that meta-ethical uncertainty about realism or anti-realism is a highly relevant factor in deciding what to do unless, again, someone is embracing an “anything goes” kind of anti-realism, which neither of us is endorsing.
Thanks for engaging with my comment (and my writing more generally)!
You’re right that I haven’t engaged here with what normative uncertainty means in that circumstance, but I think, practically, it may look a lot like the type of bargaining and aggregation referenced in this post (and outlined elsewhere), just with a different reason for why people are engaged in that behavior.
I agree that the bargaining you reference works well for resolving value uncertainty (or resolving value disagreements via compromise) even if anti-realism is true. Still, I want to flag that when it comes to individuals reflecting on their values, there are people who, due to factors like the nature and strength of their moral intuitions, their history of forming convictions related to their EA work, etc., will want to do things differently and will have fewer areas than your post would suggest where they remain fundamentally uncertain. Reading your post, there’s an implication that a person would be doing something imprudent if they didn’t consider themselves uncertain on contested EA issues, such as whether creating happy people is morally important. I’m trying to push back on that: Depending on the specifics, I think forming convictions on such matters can be fine/prudent.
Stepping back a bit, I think a big thrust of my post is that you generally shouldn’t make statements like “anti-realism is obviously true” because the nature of evidence for that claim is pretty weak, even if the arguments by which you reached that conclusion are clear and internally compelling to you.
I’m with you regarding the part about evidence being comparatively weak/brittle. Elsewhere, you wrote:
But stepping back, this back and forth looks like another example of the move I criticized above: you are making some analogies and arguing that some conclusion follows from those analogies, while I’m denying those analogies (and therefore the conclusion) and making different analogies. Neither of us has on their side the kind of definitive evidence that prevails in scientific domains.
Yeah, this does characterize philosophical discussions. At the same time, I’d say that’s partly the point behind anti-realism, so I don’t think we all have to stay uncertain on realism vs. anti-realism. I see anti-realism as the claim that we cannot do better than argument via analogies (or, as I would say, “Does this/that way of carving out the option space appeal to us/strike us as complete?”). For comparison, moral realism would then be the claim that there’s more to it, that the domain is closer/more analogous to the natural sciences. (No need to click the link for the context of continuing this discussion, but I elaborate on these points in my post on why realists and anti-realists disagree. In short, I discuss that famous duck-rabbit illusion picture as an example/analogy of how we can contextualize philosophical disagreements under anti-realism: Both the duck and the rabbit are part of the structure on the page and it’s up to us to decide which interpretation we want to discuss/focus on, which one we find appealing in various ways, which one we may choose to orient our lives around, etc.)
You’ve defined moral realism narrowly so perhaps this is neither here nor there but, as you may be aware, most English-speaking philosophers accept/lean towards moral realism despite you noting in this comment that many EAs who have been influential have been anti-realists (broadly defined). This isn’t compelling evidence, but it is evidence against the claim that anti-realism is “obviously correct” since you are at least implicitly claiming most philosophers are wrong about this issue.
(On the topic of definitions, I don’t think that the disagreements would go away if the surveys had used my preferred definitions, so I agree that expert disagreement constitutes something I should address. Definitions not matching between philosophers isn’t just an issue with how I defined moral realism, BTW; I’d say that many philosophers’ definitions draw the line in different places, so it’s not like I did anything unusual.)
I should add that I’m not attached to defending a strong meaning of “obviously correct” – I just meant that I myself don’t have doubts (and I think I’m justified in viewing it that way). I understand that things don’t seem obvious to all professional philosophers.
But maybe I’m conceding too much here – going by the survey results alone, the results would be compatible with all the philosophers thinking that the question is easy/obvious (they just happen to disagree). :) (I don’t expect this to be the case for all philosophers, of course, but many of them, at least, will feel very confident in their views!) This highlights that it’s not straightforward to go from “experts disagree on this topic” to “it is inappropriate for anyone to confidently take sides.” Experts themselves are often confident, so, if your epistemology places a lot of weight on deferring to experts, there’s a tension: “If being confident is imprudent, how can you still regard these experts as experts?” (Considerations like that are part of why I’m skeptical about what some EAs have called “modest epistemology.”)
Anyway, “professional philosophers” may sound intimidating as an abstract class, but if we consider the particular individuals who make it up, it’s less clear that all or even most of them warrant epistemic deference. Even the EAs who are into modest epistemology would probably feel quite taken aback by some of the things that people with credentials in philosophy sometimes say on philosophy topics that EAs have thought a lot about and have formed confident views on, such as animal ethics, bioethics, EA ideas like earning to give, etc. So, I’d say we’re often comfortable ruling out individual professional philosophers from being our “intellectual peers” after they voice disqualifying bad takes. (Note that “intellectual peers” is here meant as a very high bar – much higher than “we should assume that we can learn something from them.” Instead, this is about, “Even if it looks to us like they’re wrong, we should update part of the way towards them anyway, solely out of respect for their good judgment and reasoning abilities.”) From there (ruling out concrete instances of individual professional philosophers because we observe some of their shockingly bad takes), it’s not much further to no longer considering the more abstract-feeling reference class of “professional philosophers” as sacred.
Another (shorter) way to get rid of that sacredness intuition: something like 70% of philosophers of religion are theists.
Where should we best turn for experts on moral realism/anti-realism? I would say that EAs of all people have the most skin in the game – we orient our lives around the outcomes of our moral deliberations (much more so than the typical academic philosopher). Sure, there are exceptions in both camps:
Parfit said that his life’s work would be in vain if he’s wrong about metaethics, and this is in line with his actions (he basically worked on super-long metaethics books and follow-up essays and discussions up to, or close to, the point where he died).
Many EAs seem to treat metaethics more as a fun discussion activity than something they are deeply invested in getting to the bottom of (at least, that’s my takeaway from reading the recent post by Bentham’s Bulldog and the discussions that came from it, which annoyed me because of how much people were just trying to re-invent the wheel instead of engaging with or referencing canonical writings, whether in EA or outside of it).
FWIW, I don’t think metaethics is super important. It’s not completely unimportant, though, and I think EAs are often insufficiently ambitious about the possibility of “getting to the bottom of it,” which I BTW find to be a counterproductive stance that limits their intellectual development.
To get back to the point about where to find worthy experts, I think among EAs you’ll find the most people who are super invested in being right about these things and in thinking about them deeply, so I’d put much more stock in their views than in the opinions of professional academics.
Looking at the opinion landscape within EA, I actually get the impression that anti-realism wins out (see the comment you already linked to further above), especially because, among the ones who have sympathies for moral realism, this is often due to intuition-based wagers (where the person admits that things look as though moral realism is false but also says they perceive anti-realism as pointless) and deference to professional philosophers – which all seem like indirect reasons for belief. There’s also a conspicuous absence of in-depth EA posts that directly defend realism. (Even the one “pro realism” post that comes to my mind that I quite liked – Ben Garfinkel’s Realism and Rationality – contained a passage like “I wouldn’t necessarily describe myself as a realist. I get that realism is a weird position. It’s both metaphysically and epistemologically suspicious.”) By contrast, when it comes to writings that defend anti-realism, it hasn’t just been me.
I could retort here that it seems totally reasonable to argue that there’s a fact of the matter about what caused the Big Bang or how life on Earth began. What caused these could conceivably be totally inaccessible to us now but still related to known facts.
Just for ease of having the context nearby, in this passage you are replying to the following section of my post (which you also quoted):
Still, I think the notion of “forever inaccessible moral facts” is incomprehensible, not just pointless. Perhaps(?) we can meaningfully talk about “unreachable facts of unknown nature,” but it seems strange to speak of unreachable facts of some known nature (such as “moral” nature). By claiming that a fact is of some known nature, aren’t we (implicitly) saying that we know of a way to tell why that fact belongs to the category? If so, this means that the fact is knowable, at least in theory, since it belongs to a category of facts whose truth-making properties we understand. If some fact were truly “forever unknowable,” it seems like it would have to be a fact of a nature we don’t understand. Whatever those forever unknowable facts may be, they couldn’t have anything to do with concepts we already understand, such as our “moral concepts” of the form (e.g.,) “Torturing innocent children is wrong.”
Going by your reply, I think we were talking past each other. (Re-reading my passage, I unfortunately don’t find it very clear.) I agree that abiogenesis or what caused the big bang might be “totally inaccessible to us now but still related to known facts.” But I’d say it’s at least accessible to ideally-positioned and arbitrarily powerful observers. So, these things (abiogenesis, the cause behind the big bang, if there was any) are related to known facts because we know the sorts of stories we’d have to tell in a science fiction book to convince readers that there are intelligent observers who justifiably come to believe specific things about these events. (E.g., perhaps aliens in space suits conduct experiments on Earth 3.8 billion years ago, or cosmologists in a different part of the multiverse, “one level above ours,” study how new universe bubbles get birthed.) By contrast, the point I meant to make is that the types of facts that proponents of the elusive type of irreducible normativity want to postulate are much more weird and, well, elusive. They aren’t just unreachable for practical purposes; they are unreachable in every possible sense, even in science fiction stories where we can make the intelligent observers arbitrarily well-positioned and powerful. (This is because the point behind irreducible normativity is that we might not be able to trust our faculties when it comes to moral facts. No matter how elaborate a story we tell where intelligent observers develop confident takes on object-level morality, there is always the question “Are they correct?”) This setup renders these irreducibly normative facts pointless, though. If someone refuses to give any account of “what it is that makes moral claims true,” they have thereby created a “category of fact” that is, in virtue of how it was set up, completely disconnected from anything else.
(I don’t feel like I’m great at explaining this, so I’ll also mention that Joe Carlsmith wrote about the same themes in The Ignorance of Normative Realism Bot, The Despair of Normative Realism Bot, and Against the Normative Realist’s Wager.)
One way I might take this (not saying you’d agree) would be to say you think moral realism that isn’t action guiding on the contentious points isn’t moral realism worth the name because all the value of the name is in the contentious points (and this may be particularly true in EA).
This is indeed how I’ve defined moral realism! :)
As I say in the tension post, I’m okay with “minimalist moral realism is true.” I don’t feel that minimalist moral realism deserves to be called “moral realism,” but this is just a semantic choice. (My reasoning is that it would lead to confusion because I’ve never heard anyone say something like, “Yeah, moral realism is true, but population ethics looks under-defined to me and so multiple answers to it seem defensible.” In practice, people in EA who are moral realists often assume that there’s a correct moral theory that addresses all the contested domains, including population ethics. By contrast, in academia you’ll even find moral realists who are moral particularists, meaning they don’t even buy into the notion that we want to generalize moral principles across lots of situations, something that almost all EAs are interested in doing.)
But perhaps a broader issue is that I, unlike many other effective altruists, am actually cool with (in your words) “minimalist moral realism” and with using aggregation methods like those mentioned above to come to final takes about what to do given the uncertainty.
Cool! The only thing I would add then is, again, my point about how, depending on the specifics, it can be prudent to be confident about one’s values even in areas where many other EAs disagree or feel fundamentally uncertain.
I suspect that some readers may find this counterintuitive: “If morality is under-defined, why form convictions on parts of it that are under-defined? Why not just use bargaining to get a compromise among all the different views that seem defensible?”
I wrote a short dialogue on exactly this question in the “Anticipating Objections (Dialogue)” section of my sequence’s last post.