This is valuable, but at a certain point the marketplace of ideas relies on people actually engaging in object-level reasoning. There’s an obvious failure mode in rejecting new ideas on the sole meta-level basis that if they were good they would already be popular. Kind of like the old joke about the economist who refuses to pick up a hundred-dollar bill off the ground because of the Efficient Market Hypothesis.
EA & Aspiring Rationalism have grown fairly rapidly, all told! But they’re also fairly new. “Experts in related fields haven’t thought much about EA approaches” is more promising than “experts in related fields have thought a lot about EA approaches and have standard reasons to reject them.”
(Although “most experts have clear reasons to reject EA thinking on their subject matter” is closer to being the case in AI … but AI is probably also the field with the most support for longtermist & x-risk-type thinking, and the field where that support has grown fastest, IDK.)
To me we sort of seem to be doing the opposite: see, for example, some of the logic behind this post and some of the comments on it (though I like the post and think it’s useful).
Only a small red flag, IMO, because it’s rather easy to convince people of alluring falsehoods, and not so easy to convince people of uncomfortable truths.
More broadly, I often think a good way to test whether we are right is whether we can convince others. If we can’t, that’s kind of a red flag in itself.
Agree that it is a red flag. However, I also think that sometimes we have to bite the bullet on this.