But if you're engaged in the business of truth-seeking, why does it matter what the proponents do? You should ignore the bad arguments and pay attention to the best ones you can see.
If someone who I have trusted with working out the answer to a complicated question makes an error that I can see and verify, I should also downgrade my assessment of all their work, which might be much harder for me to see and verify.

Related: Gell-Mann Amnesia
(Edit: Also related, Epistemic Learned Helplessness)
Briefly stated, the Gell-Mann Amnesia effect is as follows. You open the newspaper to an article on some subject you know well. In Murray's case, physics. In mine, show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward, reversing cause and effect. I call these the "wet streets cause rain" stories. Paper's full of them.
In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read. You turn the page, and forget what you know.
The correct default response to this effect, in my view, mostly does not look like "ignoring the bad arguments and paying attention to the best ones". That's almost exactly the approach the above quote describes and (imo correctly) mocks: ignoring the show business article because your expertise lets you see the arguments are bad, and taking the Palestine article seriously because the arguments appear to be good.
I think the correct default response is something closer to "focus on your areas of expertise, and see how the proponents conduct themselves within that area. Then use that as your starting point for guessing at their accuracy in areas which you know less well".
Of course I'm expressing a super idealistic position here, and there are practical reasons not to be all the way there.
I appreciate that stuff like the above is part of why you wrote this. I still wanted to register that I think this framing is backwards. I don't think you should evaluate the strength of arguments across all domains as they come and then adjust for the trustworthiness of the person making them; in general I think it's much better (measured by believing more true things) to assess the trustworthiness of the person in some domain you understand well, and only then adjust to a limited extent based on the apparent strength of the arguments made in other domains.
It's plausible that this boils down to a question of "how good are humans at assessing the strength of arguments in areas they know little about". In the ideal, we are perfect. In reality, I think I am pretty terrible at it, in pretty much exactly the way the Gell-Mann quote describes, and so want to put minimal weight on those feelings of strength; they just don't have enough predictive power to justify moving my priors all that much. YMMV.
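To make "minimal weight" concrete, here's a toy sketch in Python. This is purely my own illustration: the log-odds blending scheme and all the numbers are invented, and signal_weight is just a stand-in for how much predictive power those feelings of argument strength actually have.

```python
import math


def logit(p: float) -> float:
    """Convert a probability in (0, 1) to log-odds."""
    return math.log(p / (1 - p))


def posterior(prior: float, signal: float, signal_weight: float) -> float:
    """Blend a prior with a noisy 'argument strength' signal in log-odds space.

    signal_weight is how much predictive power we grant the signal
    (0 = ignore it entirely, 1 = take it fully at face value).
    """
    blended = logit(prior) + signal_weight * logit(signal)
    return 1 / (1 + math.exp(-blended))


# Trust calibrated in a domain I know well: the author was visibly wrong
# there, so my prior that their foreign-domain claim is right starts low.
domain_calibrated_prior = 0.3

# ...but the foreign-domain argument *feels* strong to me.
apparent_argument_strength = 0.9

# Policy A ("evaluate arguments as they come"): take the feeling at face value.
print(posterior(domain_calibrated_prior, apparent_argument_strength, 1.0))  # ~0.79

# Policy B (what I'm arguing for): I'm bad at judging foreign-domain arguments,
# so the signal gets little weight and the domain-calibrated prior dominates.
print(posterior(domain_calibrated_prior, apparent_argument_strength, 0.2))  # ~0.40
```

Same prior, same apparent argument strength; the entire disagreement sits in how large signal_weight deserves to be.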
I appreciate the points here. I think I might be slightly less pessimistic than you about the ability to evaluate arguments in foreign domains, but the thrust of my point was this: I think that for pushing out the boundaries of collective knowledge it's roughly correct to adopt the idealistic stance I was recommending; and I think that Vaden is engaging in earnest and noticing enough important things that there's a nontrivial chance they could contribute to pushing such boundaries (and that this is valuable enough to be encouraged, rather than just encouraging activity likely to lead to the most-correct beliefs within the convex hull of things people already understand).
Ah, gotcha. I agree that the process of scientific enquiry/discovery works best when people do as you said.
I think it's worth distinguishing between that case, where taking the less accurate path in the short term has longer-term benefits, and more typical decisions like "what should I work on", or even just truth-seeking that doesn't have a decision directly attached but where you want to get the right answer. There are definitely people who still believe what you wrote literally in those cases, and ironically I think it's a good example of an argument that sounds compelling but is largely incorrect, for the reasons above.
Just wanted to quickly hop in to say that I think this little sub-thread contains interesting points on both sides, and that people who stumble upon it later may also be interested in Forum posts tagged "epistemic humility".