Thanks for the thoughtful post!
Some of the disconnect here might be semantic—my sense is people here often use “moral progress” to refer to “progress in people’s moral views,” while you seem to be using the term to mean both that and also other kinds of progress.
Other than that, I’d guess people might not yet be sold on how tractable and high-leverage these interventions are, especially in comparison to other interventions this community has identified. If you or others have more detailed cases to make for the tractability of any of these important problems, I’d be curious to see them, and I imagine others would be, too. (As you might have guessed, you might find more receptive ears if you argue for relevance to x-risks, since the risk aversion of the global health and development parts of EA seems to leave them with little interest in hard-to-measure interventions.)
I can understand not prioritizing these issues for grant-making because of tractability. But if something is highly important and no one is making progress on it, shouldn’t there at least be a lot of discussion about it, even if we don’t yet see tractable approaches? Like, shouldn’t there be energy put into trying to find tractable approaches? That energy seems to be missing, which makes me think the issues are underrated in terms of importance.
Have you ever looked into how Enlightenment ideas came about and started spreading in the first place? I have, but only in a very shallow way. Here are a couple of my previous comments about it:
https://www.greaterwrong.com/posts/jAixPHwn5bmSLXiMZ/open-and-welcome-thread-february-2020/comment/ELoAu5rzid7gLitjz
https://www.greaterwrong.com/posts/EQGcZr3vTyAe6Aiei/transitive-tolerance-means-intolerance/comment/LuXJaxaSLm6FBRLtR
Only a little bit. In part they were a reaction to the religious wars that plagued Europe for centuries.
It seems key to the project of “defense of Enlightenment ideas” to figure out whether the Age of Enlightenment came about mainly through argumentation and reasoning, or mainly through (cultural) variation and selection. If the former, then we might be able to defend Enlightenment ideas just by, e.g., reminding people of the arguments behind them. But if it’s the latter, then we might suspect that the recent decline of Enlightenment ideas was caused by weaker selection pressure towards them (allowing “cultural drift” to happen to a greater extent), or even a change in the direction of the selection pressure. Depending on the exact nature of the changes, either of these might be much harder to reverse.
A closely related line of inquiry is: what exactly were/are the arguments behind Enlightenment ideas? Did the people who adopted them do so for the right reasons? (My shallow investigation linked above suggests that the answer is at least plausibly “no”.) In either case, how sure are we that they’re the right ideals/values for us? While it seems pretty clear that Enlightenment ideas historically had good consequences in terms of, e.g., raising the living standards of many people, how do we know that they’ll still have net positive consequences going forward?
To try to steelman the anti-Enlightenment position:
People in “liberal” societies “reason” themselves into harmful conclusions all the time, and are granted “freedom” to act out their conclusions.
In an environment where everyone has easy access to worldwide multicast communication channels, “free speech” may lead to virulent memes spreading uncontrollably (and we’re already seeing the beginnings of this).
If everyone adopts Enlightenment ideas, then we face globally correlated risks of (1) people causing harm on increasingly large scales and (2) cultures evolving into things we wouldn’t recognize and/or endorse.
One reason for avoiding talking about “1-to-N” moral progress on a public EA forum is that it is inherently political. I agree with you on essentially all the issues you mentioned in the post, but I also realise that most people in the world and even in developed nations will find at least one of your positions grossly offensive—if not necessarily when stated as above, then certainly after they are taken to their logical conclusions.
Discussing how to achieve concrete goals in “1-to-N” moral progress would almost certainly lead “moral reactionaries” to start attacking the EA community, calling us “fascists” / “communists” / “deniers” / “blasphemers” depending on which kind of immorality they support. This would make life very difficult for other EAs.
Maybe the potential benefits are large enough to exceed the costs, but I don’t even know how we could go about estimating either of these.