Interesting. I think there are two related concepts here, which I’ll call individual modesty and communal modesty. By individual modesty I mean that an individual defers to the perceived experts (potentially within their community); by communal modesty I mean that the community defers to the relevant external expert opinion. I think EAs tend to have fairly strong individual modesty, but occasionally our communal modesty lets us down.
Here are a few observations on the issues EAs are likely to have strong opinions about:
1. Ethics: I’d guess that most individual EAs think they’re right about the fundamentals: that consequentialism is simply better than the alternatives. I’m not sure whether this is more communal or individual immodesty.
2. Economics/Poverty: I think EAs tend to defer to smart external economists who understand poverty better than core EAs, but are less modest about what we should prioritise in light of that expert understanding.
3. Effective Giving: Individuals tend to defer to a communal consensus. We’re the relevant experts here, I think.
4. General forecasting/Future: Individuals tend to defer to a communal consensus. We think the relevant expert class is within our community, so we have low communal modesty.
5. Animals: We probably defer to our own intuitions more than we should. Or to Brian Tomasik. If you’re anything like me, you think: “he’s probably right, but I don’t really want to think about it”.
6. Geopolitics: I think we’re particularly bad at communal modesty here—I hear lots of memes (especially about China) that seem fairly badly informed. But it’s also difficult to work out the relevant expert reference class.
7. AI (doom): Individuals tend to defer to a communal consensus, but tend to lean towards core EA’s 3–20% rather than core-LW/Eliezer’s 99+%. People broadly within our community (EA/rationalists) genuinely have thought about this issue more than anyone else, but I think there’s a debate over whether we should defer to our pet experts or to more establishment AI people.