Thanks for engaging with my comment (and my writing more generally)!
You’re right I haven’t engaged here about what normative uncertainty means in that circumstance but I think, practically, it may look a lot like the type of bargaining and aggregation referenced in this post (and outlined elsewhere), just with a different reason for why people are engaged in that behavior.
I agree that the bargaining you reference works well for resolving value uncertainty (or resolving value disagreements via compromise) even if anti-realism is true. Still, I want to flag that, when it comes to individuals reflecting on their values, some people (due to factors like the nature and strength of their moral intuitions, their history of forming convictions related to their EA work, etc.) will want to do things differently and will have fewer areas where they remain fundamentally uncertain than your post would suggest. Reading your post, there’s an implication that a person would be doing something imprudent if they didn’t consider themselves uncertain on contested EA issues, such as whether creating happy people is morally important. I’m trying to push back on that: depending on the specifics, I think forming convictions on such matters can be fine/prudent.
Stepping back a bit, I think a big thrust of my post is that you generally shouldn’t make statements like “anti-realism is obviously true” because the nature of evidence for that claim is pretty weak, even if the nature of the arguments for you reaching that conclusion were clear and are internally compelling to you.
I’m with you regarding the part about evidence being comparatively weak/brittle. Elsewhere, you wrote:
But stepping back, this back and forth looks like another example of the move I criticized above because you are making some analogies and arguing some conclusion follows from those analogies, I’m denying those analogies, and therefore denying the conclusion, and making different analogies. Neither of us has the kind of definitive evidence on their side that prevails in science domains here.
Yeah, this does characterize philosophical discussions. At the same time, I’d say that’s partly the point behind anti-realism, so I don’t think we all have to stay uncertain on realism vs. anti-realism. I see anti-realism as the claim that we cannot do better than argument via analogies (or, as I would say, “Does this/that way of carving out the option space appeal to us/strike us as complete?”). For comparison, moral realism would then be the claim that there’s more to it, that the domain is closer/more analogous to the natural sciences. (You don’t need to click the link to follow this discussion, but I elaborate on these points in my post on why realists and anti-realists disagree. In short, I discuss that famous duck-rabbit illusion picture as an example/analogy of how we can contextualize philosophical disagreements under anti-realism: Both the duck and the rabbit are part of the structure on the page, and it’s up to us to decide which interpretation we want to discuss/focus on, which one we find appealing in various ways, which one we may choose to orient our lives around, etc.)
You’ve defined moral realism narrowly so perhaps this is neither here nor there but, as you may be aware, most English-speaking philosophers accept/lean towards moral realism despite you noting in this comment that many EAs who have been influential have been anti-realists (broadly defined). This isn’t compelling evidence, but it is evidence against the claim that anti-realism is “obviously correct” since you are at least implicitly claiming most philosophers are wrong about this issue.
(On the topic of definitions, I don’t think the disagreements would go away if the surveys had used my preferred definitions, so I agree that expert disagreement is something I should address. Mismatched definitions aren’t just an issue with how I defined moral realism, BTW; many philosophers’ definitions draw the line in different places, so it’s not like I did anything unusual.)
I should add that I’m not attached to defending a strong meaning of “obviously correct” – I just meant that I myself don’t have doubts (and I think I’m justified in viewing it that way). I understand that things don’t seem obvious to all professional philosophers.
But maybe I’m conceding too much here – going just by the survey results, they would be compatible with all the philosophers thinking that the question is easy/obvious (they just happen to disagree). :) (I don’t expect this to be the case for all philosophers, of course, but many of them, at least, will feel very confident in their views!) This highlights that it’s not straightforward to go from “experts disagree on this topic” to “it is inappropriate for anyone to confidently take sides.” Experts themselves are often confident, so, if your epistemology places a lot of weight on deferring to experts, there’s a tension: “If being confident is imprudent, how can you still regard these experts as experts?” (Considerations like that are part of why I’m skeptical about what some EAs have called “modest epistemology.”)
Anyway, “professional philosophers” may sound intimidating as an abstract class, but if we consider the particular individuals who make it up, it’s less clear that all or even most of them warrant epistemic deference. Even the EAs who are into modest epistemology would probably feel quite taken aback by some of the things that people with credentials in philosophy sometimes say on topics that EAs have thought a lot about and have formed confident views on, such as animal ethics, bioethics, EA ideas like earning to give, etc. So, I’d say we’re often comfortable ruling out individual professional philosophers from being our “intellectual peers” after they voice disqualifying bad takes. (Note that “intellectual peers” is here meant as a very high bar – much higher than “we should assume that we can learn something from them.” Instead, this is about, “Even if it looks to us like they’re wrong, we should update part of the way towards them anyway, solely out of respect for their good judgment and reasoning abilities.”) From there (ruling out concrete individual professional philosophers because we observe some of their shockingly bad takes), it’s not much further to no longer treating the more abstract-feeling reference class of “professional philosophers” as sacred.

Another (shorter) way to get rid of that sacredness intuition: something like 70% of philosophers of religion are theists.
Where should we turn for experts on moral realism/anti-realism? I would say that EAs of all people have the most skin in the game – we orient our lives around the outcomes of our moral deliberations (much more so than the typical academic philosopher). Sure, there are exceptions in both camps:
Parfit said that his life’s work would be in vain if he’s wrong about metaethics, and this is in line with his actions (he basically worked on super-long metaethics books and follow-up essays and discussions up to or close to the point where he died).
Many EAs seem to treat metaethics more as a fun discussion activity than as something they are deeply invested in getting to the bottom of (at least that’s my takeaway from reading the recent post by Bentham’s Bulldog and the discussions that came from it, which annoyed me because of how much people were just trying to reinvent the wheel instead of engaging with or referencing canonical writings, whether in EA or outside of it).
FWIW, I don’t think metaethics is super important. It’s not completely unimportant, though, and I think EAs are often insufficiently ambitious about the possibility of “getting to the bottom of it,” which, BTW, I find to be a counterproductive stance that limits their intellectual development.
To get back to the point about where to find worthy experts: I think it’s among EAs that you’ll find the most people who are super invested in being right about these things and in thinking about them deeply, so I’d put much more stock in them than in the opinions of professional academics.
Looking at the opinion landscape within EA, I actually get the impression that anti-realism wins out (see the comment you already linked to further above), especially because, among the ones who have sympathies for moral realism, this is often due to intuition-based wagers (where the person admits that things look as though moral realism is false but says they perceive anti-realism as pointless) and deference to professional philosophers – which both seem like indirect reasons for belief. There’s also a conspicuous absence of in-depth EA posts that directly defend realism. (Even the one “pro realism” post that comes to my mind that I quite liked – Ben Garfinkel’s Realism and Rationality – contained a passage like “I wouldn’t necessarily describe myself as a realist. I get that realism is a weird position. It’s both metaphysically and epistemologically suspicious.”) By contrast, when it comes to writings that defend anti-realism, it hasn’t just been me.
I could retort here that it seems totally reasonable to argue that there’s a fact of the matter about what caused the Big Bang or how life on Earth began. What caused these could conceivably be totally inaccessible to us now but still related to known facts.
Just for ease of having the context nearby, in this passage you are replying to the following section of my post (which you also quoted):
Still, I think the notion of “forever inaccessible moral facts” is incomprehensible, not just pointless. Perhaps(?) we can meaningfully talk about “unreachable facts of unknown nature,” but it seems strange to speak of unreachable facts of some known nature (such as “moral” nature). By claiming that a fact is of some known nature, aren’t we (implicitly) saying that we know of a way to tell why that fact belongs to the category? If so, this means that the fact is knowable, at least in theory, since it belongs to a category of facts whose truth-making properties we understand. If some fact were truly “forever unknowable,” it seems like it would have to be a fact of a nature we don’t understand. Whatever those forever unknowable facts may be, they couldn’t have anything to do with concepts we already understand, such as our “moral concepts” of the form (e.g.,) “Torturing innocent children is wrong.”
Going by your reply, I think we were talking past each other. (Re-reading my passage, I unfortunately don’t find it very clear.) I agree that abiogenesis or what caused the Big Bang might be “totally inaccessible to us now but still related to known facts.” But I’d say these things are at least accessible to ideally-positioned and arbitrarily powerful observers. So, they (abiogenesis, the cause behind the Big Bang, if there was any) are related to known facts because we know the sorts of stories we’d have to tell in a science fiction book to convince readers that there are intelligent observers who justifiably come to believe specific things about these events. (E.g., perhaps aliens in space suits conduct experiments on Earth 3.8 billion years ago, or cosmologists in a different part of the multiverse, “one level above ours,” study how new universe bubbles get birthed.) By contrast, the point I meant to make is that the types of facts that proponents of the elusive type of irreducible normativity want to postulate are much weirder and, well, more elusive. They aren’t just unreachable for practical purposes; they are unreachable in every possible sense, even in science fiction stories where we can make the intelligent observers arbitrarily well-positioned and powerful. (This is because the point behind irreducible normativity is that we might not be able to trust our faculties when it comes to moral facts. No matter how elaborate a story we tell where intelligent observers develop confident takes on object-level morality, there is always the question “Are they correct?”) This setup renders these irreducibly normative facts pointless, though. If someone refuses to give any account of “what it is that makes moral claims true,” they have thereby created a “category of fact” that is, in virtue of how it was set up, completely disconnected from anything else.

(I don’t feel like I’m great at explaining this, so I’ll also mention that Joe Carlsmith wrote about the same themes in The Ignorance of Normative Realism Bot, The Despair of Normative Realism Bot, and Against the Normative Realist’s Wager.)
One way I might take this (not saying you’d agree) would be to say you think moral realism that isn’t action guiding on the contentious points isn’t moral realism worth the name because all the value of the name is in the contentious points (and this may be particularly true in EA).
This is indeed how I’ve defined moral realism! :)
As I say in the tension post, I’m okay with “minimalist moral realism is true.” I don’t feel that minimalist moral realism deserves to be called “moral realism,” but this is just a semantic choice. (My reasoning is that it would lead to confusion, because I’ve never heard anyone say something like, “Yeah, moral realism is true, but population ethics looks under-defined to me and so multiple answers to it seem defensible.” In practice, people in EA who are moral realists often assume that there’s a correct moral theory that addresses all the contested domains, including population ethics. By contrast, in academia you’ll even find moral realists who are moral particularists, meaning they don’t even buy into the notion that we want to generalize moral principles across lots of situations, something that almost all EAs are interested in doing.)
But perhaps a broader issue is I, unlike many other effective altruists, am actually cool with (in your words) “minimalist moral realism” being fine and using aggregation methods like those mentioned above to come to final takes about what to do given the uncertainty.
Cool! The only thing I would add, then, is again my point about how, depending on the specifics, it can be prudent to be confident about one’s values even in areas where many other EAs disagree or feel fundamentally uncertain.
I suspect that some readers may find this counterintuitive, thinking: “If morality is under-defined, why form convictions on parts of it that are under-defined? Why not just use bargaining to get a compromise among all the different views that seem defensible?”

I wrote a short dialogue on exactly this question in the “Anticipating Objections (Dialogue)” section of my sequence’s last post.