By contrast, I’m not sure there is a similarly rationalizing explanation for why many EAs agree on both (i) that there’s a moral imperative for cost-effectiveness and (ii) that you should one-box in Newcomb’s problem, and for why many know more about cognitive biases than about the leading theories for why the Industrial Revolution started in Europe rather than China.
Super interesting point!
I want to think about this more. Presently, I wouldn’t be surprised if (i) to (iii) all appealed more to a certain shape of mind – which could generate conformity along some axes.
Is this not explained by founder effects from Less Wrong?
It probably is, but I don’t think this explanation is rationalizing. I.e. I don’t think this founder effect would provide a good reason to think that this distribution of knowledge and opinions is conducive to reaching the community’s goals.
Sure, but that just pushes the interesting question back a level – the question becomes “why was LessWrong a viable project / Eliezer a viable founder?”