What does moral progress consist of?
There’s something that strikes me as odd about the way I hear “moral progress” discussed in the EA/longtermist community in general and in What We Owe the Future in particular.
The issues generally discussed under this heading are things like animal welfare, existential risk, "coordination" mechanisms, and so forth.
However, when I look out at the world, I see a lot of problems rarely discussed among EAs. Most of the world is still deeply religious; in some communities and even whole nations, this pervades the culture and can even create oppression. Much of the world still lives under authoritarian or even totalitarian governments. Much of the world lives in countries that lack the institutional capacity to support even rudimentary economic development, leaving their people without electricity, clean water, etc.
And the conversation around the world's problems leaves even more to be desired. A lot of the discussion of environmental issues lacks scientific or technical understanding, treating nature like a god and industry like a sin—there's now even a literal "degrowth" movement. Many young people in the US are now attracted to socialism, an ideology that should have been left behind in the 20th century. Others support Trump, an authoritarian demagogue with no qualifications for leadership. Both sides seem eager to tear down the institutions of liberalism in a "burn it all down" mentality. And all around me I see people more concerned with tribal affiliations than with truth-seeking.
So when I think about what moral progress the world needs, I mostly think it needs a defense of Enlightenment ideas such as reason and liberalism, so that these ideas can become the foundation for addressing the real problems and threats we face.
I think this might be highly relevant even to someone solely concerned with existential risk. For instance, if we want to make sure that an extinction-level weapon doesn't get into the hands of a radical terrorist group, it would be good if there were fewer fundamentalist ideologies in the world, and no nation-states that sponsor them. More prosaically, if we want to have a good response to pandemics, it would be good to have competent leadership instead of the opposite (my understanding is that the US COVID response could have been much better if we had just followed the pre-existing pandemic response plan). If we want to make sure civilization deals with climate change, it would be good to have a world that believed in technological solutions rather than being locked in a battle over "degrowth." Etc.
Looking at it another way, we could think about two dimensions of moral progress, analogous to two dimensions of economic progress: pushing forward the frontier, vs. distribution of a best-practice standard. Zero-to-one progress vs. one-to-N progress. EA folks are very focused on pushing forward the moral frontier, breaking new moral ground—but I’m very worried about, well, let’s call it “moral inequality”: simple best practices like “allow freedom of speech,” “give women equality,” or even “use reason and science” are nowhere near universal.
These kinds of concerns are what drew me to "progress studies" in the first place (before that term even existed). I see progress studies first and foremost as an intellectual defense of progress as such, and ultimately of the Enlightenment ideas that underlie it.
But I never hear EA folks talk about these kinds of issues, and these ideas don’t seem to resonate with the community when I bring them up. I’m still left wondering, what is the disconnect here?
Thanks for the thoughtful post!
Some of the disconnect here might be semantic—my sense is that people here often use "moral progress" to refer to "progress in people's moral views," while you seem to be using the term to mean both that and other kinds of progress.
Other than that, I'd guess people might not yet be sold on how tractable and high-leverage these interventions are, especially in comparison to other interventions this community has identified. If you or others have more detailed cases to make for the tractability of any of these important problems, I'd be curious to see them, and I imagine others would be, too. (As you might have guessed, you might find more receptive ears if you argue for relevance to x-risks, since the risk aversion of the global health and development parts of EA seems to leave them with little interest in hard-to-measure interventions.)
I can understand not prioritizing these issues for grant-making, because of tractability. But if something is highly important, and no one is making progress on it, shouldn't there at least be a lot of discussion about it, even if we don't yet see tractable approaches? Like, shouldn't there be energy going into the search for tractability? That seems to be missing, which makes me think that the issues are underrated in terms of importance.
Have you ever looked into how Enlightenment ideas came about and started spreading in the first place? I have, but only in a very shallow way. Here are a couple of my previous comments about it:
https://www.greaterwrong.com/posts/jAixPHwn5bmSLXiMZ/open-and-welcome-thread-february-2020/comment/ELoAu5rzid7gLitjz
https://www.greaterwrong.com/posts/EQGcZr3vTyAe6Aiei/transitive-tolerance-means-intolerance/comment/LuXJaxaSLm6FBRLtR
Only a little bit. In part they were a reaction to the religious wars that plagued Europe for centuries.
It seems key to the project of “defense of Enlightenment ideas” to figure out whether the Age of Enlightenment came about mainly through argumentation and reasoning, or mainly through (cultural) variation and selection. If the former, then we might be able to defend Enlightenment ideas just by, e.g., reminding people of the arguments behind them. But if it’s the latter, then we might suspect that the recent decline of Enlightenment ideas was caused by weaker selection pressure towards them (allowing “cultural drift” to happen to a greater extent), or even a change in the direction of the selection pressure. Depending on the exact nature of the changes, either of these might be much harder to reverse.
A closely related line of inquiry is: what exactly were/are the arguments behind Enlightenment ideas? Did the people who adopted them do so for the right reasons? (My shallow investigation linked above suggests that the answer is at least plausibly "no".) In either case, how sure are we that they're the right ideals/values for us? While it seems pretty clear that Enlightenment ideas historically had good consequences in terms of, e.g., raising the living standards of many people, how do we know that they'll still have net positive consequences going forward?
To try to steelman the anti-Enlightenment position:
People in “liberal” societies “reason” themselves into harmful conclusions all the time, and are granted “freedom” to act out their conclusions.
In an environment where everyone has easy access to worldwide multicast communication channels, “free speech” may lead to virulent memes spreading uncontrollably (and we’re already seeing the beginnings of this).
If everyone adopts Enlightenment ideas, then we face globally correlated risks of (1) people causing harm on increasingly large scales and (2) cultures evolving into things we wouldn’t recognize and/or endorse.
One reason for avoiding talking about “1-to-N” moral progress on a public EA forum is that it is inherently political. I agree with you on essentially all the issues you mentioned in the post, but I also realise that most people in the world and even in developed nations will find at least one of your positions grossly offensive—if not necessarily when stated as above, then certainly after they are taken to their logical conclusions.
Discussing how to achieve concrete goals in “1-to-N” moral progress would almost certainly lead “moral reactionaries” to start attacking the EA community, calling us “fascists” / “communists” / “deniers” / “blasphemers” depending on which kind of immorality they support. This would make life very difficult for other EAs.
Maybe the potential benefits are large enough to exceed the costs, but I don’t even know how we could go about estimating either of these.