What the Moral Truth might be makes no difference to what will happen

Many longtermists seem hopeful that our successors (or any advanced civilization/​superintelligence) will eventually act in accordance with some moral truth.[1] While I’m sympathetic to some forms of moral realism, I believe that such a scenario is fairly unlikely for any civilization and even more so for the most advanced/​expansionist ones. This post briefly explains why.

To be clear, my case in no way implies that we should not act according to what we think might be a moral truth. I simply argue that we can’t assume that our successors—or any powerful civilization—will “do the (objectively) right thing”. And this matters for longtermist cause prioritization.

Epistemic status: Since I believe the ideas in this post to be less important than those in future ones within this sequence, I wrote it quickly and didn’t ask anyone for thorough feedback before posting, which makes me think I’m more likely than usual to have missed important considerations. Let me know what you think!

Update April 10th: When I first posted this, the title was “It Doesn’t Matter what the Moral Truth might be”. I realized this was misleading: it made it look like I was making a strong normative claim about what matters, while my goal was actually to predict what might happen, so I changed it.

Rare are those who will eventually act in accordance with some moral truth

For agents to do what might objectively be the best thing to do, all of the following conditions need to be met:

  1. There is a moral truth.

  2. It is possible to “find it” and recognize it as such.

  3. They find something they recognize as a moral truth.

  4. They (unconditionally) accept it, even if it is highly counterintuitive.

  5. The thing they found is actually the moral truth. No normative mistake.

  6. They succeed at acting in accordance with it. No practical mistake.

  7. They stick to this forever. No value drift.

I think these seven conditions are generally quite unlikely to all be met at once (the toy calculation after the list below illustrates how demanding such a conjunction is), mainly for the following reasons:

  • (Re: condition #1) While I find compelling the argument that (some of) our subjective experiences are instantiations of objective (dis)value (see Rawlette 2016; Vinding 2014), I am highly skeptical about claims of moral truths that are not completely dependent on sentience.

  • (Re: #2) I don’t see why we should assume it is possible to “find” (with a sufficient degree of certainty) the moral truth, especially if it is more complex than – or different from – something like “pleasure is good and suffering is bad.”

  • (Re: #3 and #4) If they “find” a moral truth and don’t like what it says, they might very well not act in accordance with it.[2]

  • (Re: #3, #4, #5, and #7) Within a civilization, we should expect the agents whose values are best adapted to, and most competitive at, survival, replication, and expansion to eventually be selected for (see, e.g., Bostrom 2004; Hanson 1998), and I see no reason to suppose the moral truth is particularly well adapted to those things.
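
To get a feel for how demanding this conjunction is, here is a purely illustrative calculation. This is a minimal sketch: the probabilities and the independence assumption are arbitrary placeholders of mine, not estimates I would defend.

```python
# Toy conjunction calculation with made-up, purely illustrative probabilities.
# Each entry is an assumed chance that one of the seven conditions holds,
# treated as independent for simplicity.
conditions = {
    "1. a moral truth exists": 0.8,
    "2. it is findable and recognizable": 0.7,
    "3. they find something they recognize as it": 0.7,
    "4. they unconditionally accept it": 0.6,
    "5. no normative mistake": 0.7,
    "6. no practical mistake": 0.8,
    "7. no value drift, ever": 0.5,
}

joint = 1.0
for p in conditions.values():
    joint *= p

print(f"Joint probability: {joint:.3f}")  # ~0.066 with these placeholder numbers
```

Even with each condition at 50–80%, the conjunction lands well below 10%. Correlations between the conditions could of course push this up or down, which is why the numbers are only an intuition pump.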

Even if they’re not rare, their impact will stay marginal

Now, let’s actually assume that many smart agents converge on THE moral truth and effectively optimize for whatever it says. The problem is that, for reasons analogous to those in the last bullet point above, we may expect civilizations—or groups/individuals within a civilization—that adopt the moral truth to be less competitive than those whose values are the most adaptive and adapted to space colonization races.

My subsequent post investigates this selection effect in more detail, but here is an intuition pump: Say Denmark wants to follow the moral truth, which is to maximize the sum V − D, where V is something valuable and D something disvaluable. Meanwhile, France just wants something close to “occupy as much space territory as possible”. While the Danes face a trade-off between (A) spreading and building military weapons/defenses as fast as possible and (B) investing in “colonization safety”[3] to make sure they actually end up optimizing for what the moral truth says, the French don’t face this trade-off and can go all-in on (A), which gives them an evolutionary advantage. The significance of this selection effect depends on whether the moral truth is among – or close to – the most “expansion-conducive” intrinsic goals civilizations can plausibly have, and I doubt that it is.
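
To make this intuition pump slightly more concrete, here is a minimal toy model. Everything in it (the growth rule, the rates, the 30% safety share) is an arbitrary assumption of mine for illustration: each civilization splits its effort between expansion (A) and colonization safety (B), and only the expansion share compounds into territory.

```python
# Minimal toy model of the selection effect sketched above.
# Assumption (mine, purely illustrative): territory compounds each generation
# at a rate proportional to the share of effort spent on expansion.

def territory_after(generations: int, expansion_share: float, growth_rate: float = 0.1) -> float:
    """Territory reached after `generations`, starting from 1 unit."""
    territory = 1.0
    for _ in range(generations):
        territory *= 1 + growth_rate * expansion_share
    return territory

# "France": all-in on expansion (A).
france = territory_after(generations=1000, expansion_share=1.0)
# "Denmark": diverts 30% of its effort to colonization safety (B).
denmark = territory_after(generations=1000, expansion_share=0.7)

print(f"France/Denmark territory ratio: {france / denmark:.2e}")  # ~1e12
```

The point is only that a small per-step expansion edge compounds into near-total dominance over long timescales; whether the moral truth actually implies giving up such an edge is exactly the question raised above.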

Conclusion

Acting in accordance with some moral truth requires succeeding at each of a long series of non-obvious steps, which is unlikely.

Also, values don’t come out of nowhere. They are the product of evolutionary processes. We should expect the most adaptive and adapted values to be the most represented, at least among the most expansionist societies. And how true a moral theory might be seems fairly orthogonal to how competitive it is,[4] such that we—a priori—have no good reason to expect (the most powerful) civilizations/​agents to do what might be objectively good.

If I’m roughly correct, this implies that the “discoverable moral reality” argument in favor of assuming the future will be good (see Anthis 2022) is pretty bad. This probably also has more direct implications for longtermist cause prioritization that will be addressed in subsequent posts within this sequence.

Acknowledgment

My work on this sequence so far has been funded by Existential Risk Alliance.

All assumptions/​claims/​omissions are my own.

  1. ^

    This is informed by informal interactions I’ve had, plus my recollections of claims made in some podcasts I can’t recall. I actually can’t find anything fleshing out this exact idea, surprisingly, and I don’t think it’s worth spending more time searching. Please share in the comments if you can think of any!

  2. ^

    Interestingly, Brian Tomasik (2014) writes: “Personally, I don’t much care what the moral truth is even if it exists. If the moral truth were published in a book, I’d read the book out of interest, but I wouldn’t feel obligated to follow its commands. I would instead continue to do what I am most emotionally moved to do.”

  3. ^

    After a thorough evaluation, they might even realize that the best way to maximize their utility is to avoid colonizing space (e.g., because the expected disvalue of conflict with France or with alien civilizations is too high).

  4. ^

    This comment thread discusses an interesting argument by Wei Dai that challenges this claim.