Overall, I’d say there’s for sure going to be some degree of moral convergence, but it’s often overstated, and whether the degree of convergence is strong enough to warrant going for the AI strategies you discuss in your subsequent posts (e.g., here) would IMO depend on a tricky weighting of risks and benefits (including the degree to which alternatives seem promising).
Does moral realism imply the convergent morality thesis? Not strictly, although it’s suggestive. And even if you believe both, presumably there’s some causal mechanism behind convergent morality. Personally, though, I find that many of the intuitions that used to make me sympathetic to realism now make me sympathetic to the convergent morality thesis instead.
I agree with this endnote.
For my anti-realism sequence, I’ve actually made the stylistic choice of defining (one version of) moral realism as implying moral convergence (at least under ideal reasoning circumstances). That’s notably different from how philosophers typically define it. I went for my idiosyncratic definition because, when I tried to work out which versions of moral realism are action-guiding (here), many of the ways philosophers have defined “moral realism” in the literature didn’t actually seem relevant for what we should do as effective altruists. I could only come up with two (very different!) types of moral realism that would have clear implications for effective altruism.
(1) Non-naturalist moral realism based on the (elusive?) concept of irreducible normativity.
(2) Naturalist moral realism where the true morality is what people who are interested in “doing the most moral/altruistic thing” would converge on under ideal reflection conditions.
(See this endnote where I further justify my choice of (2) against some possible objections.)
I think (1) just doesn’t work as a concept, and (2) is almost certainly false, at least in its strongest form. But yeah, there will be degrees of convergence, and moral reflection (even at the individual level, without convergence) is also relevant from within a moral anti-realist reasoning framework.
This comment I just made on Will Aldred’s Long Reflection Reading List seems relevant for this topic.