Well, I’ve been thinking about these things precisely in order to make top-level posts, but then my priorities shifted: I ended up thinking that the EA epistemic community was doing fine without my interventions, and all that remained in my toolkit were cool ideas that weren’t necessarily usefwl. I might reconsider it. :p
Keep in mind that in my own framework, I’m an Explorer, not an Expert. Not safe to defer to.
On my impressions: relative to most epistemic communities, I think EA is doing pretty well. Relative to a hypothetical ideal, we’ve got a way to go. And I think the community is good enough to be worth spending perfectionist attention on trying to make excellent.
Some (controversial) reasons I’m surprisingly optimistic about the community:
1) It’s already bubbly, both geographically and across social networks, and those bubbles explore various paradigms.
2) The social-status gradient is (to some extent) aligned with deference at the lower levels and differentiation at the higher levels. And as long as testimonial evidence/deference flows downwards (where it’s likely to improve opinions), and the top level tries to avoid conforming, there’s a status push towards exploration and confidence in independent impressions.
3) As long as deference is mostly unidirectional (downwards in social status), there are fewer loops/information cascades (less double-counting of evidence), and epistemic bubbles are harder to form and easier to pop (from above); see the sketch after this list for the double-counting mechanism. And social status isn’t that hard to attain for conscientious smart people, I think, so smart people aren’t stuck at the bottom where their opinions are under-utilised? Idk.
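To make the double-counting worry in (3) concrete, here’s a minimal toy sketch, purely my own illustration rather than anything formal: two Bayesian-ish agents who pool evidence by adding log-odds. The 0.7 signal accuracy and the three loop rounds are made-up parameters.

```python
import math

def logodds(p: float) -> float:
    """Convert a probability to log-odds."""
    return math.log(p / (1 - p))

def prob(x: float) -> float:
    """Convert log-odds back to a probability."""
    return 1 / (1 + math.exp(-x))

# Toy setup (made-up numbers): agents A and B each privately observe one
# independent signal for hypothesis H, each signal having accuracy 0.7.
LLR = logodds(0.7)   # evidence weight of one signal, in log-odds
a, b = LLR, LLR      # both start at 50/50 and update on their own signal

# Correct pooling of two independent signals: add their log-odds once.
correct = a + b

# One-way deference: B adopts A's evidence once and stops. No double-count.
assert b + a == correct

# Two-way deference loop: each round, both adopt the other's *posterior*
# shift, re-counting the same two signals again and again.
a_loop, b_loop = a, b
for round_no in range(1, 4):
    a_loop, b_loop = a_loop + b_loop, b_loop + a_loop
    print(f"round {round_no}: P(H) = {prob(a_loop):.4f}")

print(f"correct pooled P(H) = {prob(correct):.4f}")
```

The loop races to near-certainty (~0.999 after three rounds) from the same two signals that only justify ~0.84; that runaway is what I mean by a cascade from double-counting, and it can’t get started if evidence only flows one way.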
Probably more should go here, but I forget. The community could definitely be better, and it’s worth exploring how to optimise it (any clever norms we can spread about trust functions?), so I’m not sure we disagree except you happen to look like the grumpy one because I started the chain by speaking optimistically. :3
Thanks<3