Also, my hope was that this would highlight a methodological error (equating made-up numbers with real data) that could be rectified, whether or not you buy my other arguments about longtermism. I’d be a lot more sympathetic to longtermism in general if its proponents were careful to adhere to the methodological rule of only ever comparing subjective probabilities with other subjective probabilities (and not subjective probabilities with objective ones derived from data).
I’m sympathetic to something in the vicinity of your complaint here: striving to compare like with like, and being cognizant of the weaknesses of the comparison when that’s impossible (e.g. if someone tried the reasoning from the Shivani example in earnest, rather than as a toy example in a philosophy paper, I think it would rightly get a lot of criticism).
(I don’t think that “subjective” and “objective” are quite the right categories here, btw; e.g. even the GiveWell estimates of cost-to-save-a-life include some subjective components.)
In terms of your general sympathy with longtermism—it makes sense to me that the behaviour of its proponents should affect your sympathy with those proponents. And if you’re thinking of the position as a political stance (who you’re allying yourself with, etc.), then it makes sense that it could affect your sympathy with the position. But if you’re engaged in the business of truth-seeking, why does it matter what the proponents do? You should ignore the bad arguments and pay attention to the best ones you can see—whether or not anyone actually made them. (Of course I’m expressing a super idealistic position here, and there are practical reasons not to be all the way there, but I still think it’s worth thinking about.)
But if you’re engaged in the business of truth-seeking, why does it matter what the proponents do? You should ignore the bad arguments and pay attention to the best ones you can see
If someone who I have trusted with working out the answer to a complicated question makes an error that I can see and verify, I should also downgrade my assessment of all their work which might be much harder for me to see and verify.
Related: Gell-Mann Amnesia. (Also related: Epistemic Learned Helplessness.)

Briefly stated, the Gell-Mann Amnesia effect is as follows. You open the newspaper to an article on some subject you know well. In Murray’s case, physics. In mine, show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward—reversing cause and effect. I call these the “wet streets cause rain” stories. Paper’s full of them.
In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read. You turn the page, and forget what you know.
The correct default response to this effect, in my view, mostly does not look like ‘ignoring the bad arguments and paying attention to the best ones’. That’s almost exactly the approach the above quote describes and (imo correctly) mocks: ignoring the show business article because your expertise lets you see the arguments are bad, and taking the Palestine article seriously because the arguments appear to be good.
I think the correct default response is something closer to ‘focus on your areas of expertise, and see how the proponents conduct themselves within that area. Then use that as your starting point for guessing at their accuracy in areas which you know less well’.
Of course I’m expressing a super idealistic position here, and there are practical reasons not to be all the way there
I appreciate that stuff like the above is part of why you wrote this. I still wanted to register that I think this framing is backwards: I don’t think you should evaluate the strength of arguments across all domains as they come and then adjust for the trustworthiness of the person making them. In general I think it’s much better (measured by believing more true things) to assess the trustworthiness of the person in some domain you understand well, and only then adjust to a limited extent based on the apparent strength of the arguments made in other domains.
It’s plausible that this boils down to a question of ‘how good are humans at assessing the strength of arguments in areas they know little about’. In the ideal, we are perfect. In reality, I think I am pretty terrible at it, in pretty much exactly the way the Gell-Mann quote describes, and so want to put minimal weight on those feelings of strength; they just don’t have enough predictive power to justify moving my priors all that much. YMMV.
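To put rough numbers on the kind of update I have in mind, here’s a minimal toy sketch in Python. All the numbers are made up purely for illustration (they aren’t a claim about any actual source): the point is just that one verifiable error can move trust a lot, and that drags credence in the unverifiable claims down with it.

```python
# A toy model (numbers entirely made up for illustration) of the update described
# above: treat the source as either "careful" or "sloppy", observe one verifiable
# error, and see how much credence in their *unverifiable* claims should shift.

p_careful = 0.7                # prior that the source is careful
p_error_if_careful = 0.05      # assumed chance a careful source makes a checkable error
p_error_if_sloppy = 0.40       # assumed chance a sloppy source makes a checkable error

# Bayes' rule after observing one verifiable error.
posterior_careful = (p_error_if_careful * p_careful) / (
    p_error_if_careful * p_careful + p_error_if_sloppy * (1 - p_careful)
)  # ~0.23

# Assumed accuracy of each type of source on claims I cannot check myself.
p_right_if_careful = 0.9
p_right_if_sloppy = 0.6

def credence_in_unverifiable_claim(trust: float) -> float:
    """Credence that an unverifiable claim is right, given current trust."""
    return p_right_if_careful * trust + p_right_if_sloppy * (1 - trust)

print(f"trust: {p_careful:.2f} -> {posterior_careful:.2f}")
print(f"credence in unverifiable claim: "
      f"{credence_in_unverifiable_claim(p_careful):.2f} -> "
      f"{credence_in_unverifiable_claim(posterior_careful):.2f}")
```

With these made-up numbers, one checked error moves trust from 0.70 to roughly 0.23, and credence in an unverifiable claim from about 0.81 to about 0.67.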
I appreciate the points here. I think I might be slightly less pessimistic than you about the ability to evaluate arguments in foreign domains, but the main reason I was making that point is twofold: I think that for pushing out the boundaries of collective knowledge it’s roughly correct to adopt the idealistic stance I was recommending; and I think that Vaden is engaging in earnest and noticing enough important things that there’s a nontrivial chance they could contribute to pushing such boundaries (and that this is valuable enough to be encouraged, rather than just encouraging activity that is likely to lead to the most-correct beliefs within the convex hull of things people already understand).
Ah, gotcha. I agree that the process of scientific enquiry/discovery works best when people do as you said.
I think it’s worth distinguishing between that case, where taking the less accurate path in the short term has longer-term benefits, and more typical decisions like ‘what should I work on’, or even just truth-seeking that doesn’t have a decision directly attached but where you want to get the right answer. There are definitely people who still believe what you wrote, taken literally, in those cases, and ironically I think it’s a good example of an argument that sounds compelling but is largely incorrect, for the reasons above.
Just wanted to quickly hop in to say that I think this little sub-thread contains interesting points on both sides, and that people who stumble upon it later may also be interested in Forum posts tagged “epistemic humility”.