I’ve seen EA writing (particularly about AI safety) that goes something like:
I know X and Y thought leaders in AI safety, they’re exceptionally smart people with opinion A, so even though I personally think opinion B is more defensible, I also think I should be updating my natural independent opinion in the direction of A, because they’re way smarter and more knowledgeable than me.
I’m struggling to see how this update strategy makes sense. It seems to have merit when X and Y know or understand things that literally no other expert knows, but in every other scenario that comes to mind it seems neutral at best, and otherwise worse than totally disregarding the “thought leader status” of X and Y.
Am I missing something?
The reasoning is that a knowledgeable person’s belief in a view is evidence for that view.
This is a type of reasoning people use a lot in many different contexts. I think it’s a valid and important type of reasoning (even though specific instances of it can of course be mistaken).
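To make that concrete, here is a minimal sketch (my own illustration, not taken from the references below) of how an expert’s stated belief can function as Bayesian evidence. The specific numbers, and the assumption that a well-informed expert is more likely to endorse A when A is in fact true, are illustrative only.

```python
# Minimal sketch (illustrative assumptions throughout): treating an expert's
# endorsement of A as Bayesian evidence about A.

def update_on_endorsement(prior, p_endorse_if_true, p_endorse_if_false):
    """Posterior P(A) after observing that an expert endorses A."""
    prior_odds = prior / (1 - prior)
    likelihood_ratio = p_endorse_if_true / p_endorse_if_false
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Suppose your independent impression puts P(A) at 0.3, and you think a
# well-informed expert is three times as likely to endorse A when A is true
# as when it is false. Their endorsement should then move you toward A.
print(update_on_endorsement(prior=0.3, p_endorse_if_true=0.6, p_endorse_if_false=0.2))
# ~0.56
```

The update is only as strong as the likelihood ratio: if you think the expert would endorse A about as readily whether or not A is true, their belief carries little evidential weight.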
Some references:
Stanford Encyclopedia of Philosophy, ‘Disagreement’ (the Equal Weight View): https://plato.stanford.edu/entries/disagreement/#EquaWeigView
Jonathan Matheson, Why It’s OK Not to Think for Yourself: https://www.routledge.com/Why-Its-OK-Not-to-Think-for-Yourself/Matheson/p/book/9781032438252
‘In defence of epistemic modesty’: https://forum.effectivealtruism.org/posts/WKPd79PESRGZHQ5GY/in-defence-of-epistemic-modesty
What you describe in your first paragraph sounds to me like a good updating strategy, except I would say that you’re not updating your “natural independent opinion,” you’re updating your all-things-considered belief.
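One way to picture the distinction (a rough sketch of my own, not something prescribed in the posts below; the log-odds pooling rule and the weight on the expert are illustrative assumptions): your independent impression stays fixed, and your all-things-considered belief is what you get after pooling it with the expert’s credence.

```python
# Sketch: keep the independent impression stored separately, and derive the
# all-things-considered belief by pooling it with an expert's credence.
# The pooling rule and the 0.7 weight are illustrative assumptions.
import math

def logit(p):
    return math.log(p / (1 - p))

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def all_things_considered(independent_impression, expert_credence, expert_weight=0.7):
    """Weighted average of the two credences in log-odds space."""
    pooled = ((1 - expert_weight) * logit(independent_impression)
              + expert_weight * logit(expert_credence))
    return sigmoid(pooled)

independent_impression = 0.3   # unchanged: still what you report as your own take
expert_credence = 0.8
print(all_things_considered(independent_impression, expert_credence))  # ~0.67: what you act on
```

The point of keeping the two numbers separate is that you can still report the 0.3 as your independent impression while acting on (and reporting, when asked) the ~0.67 as your all-things-considered belief.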
Related short posts I recommend—the first explains the distinction I’m pointing at, and the second shows how things can go wrong if people don’t track it:
‘Independent impressions’
‘When reporting AI timelines, be clear who you’re deferring to’