I liked the post. Some notes:
In my experience, the level of disagreeableness and paranoia needed to overcome motivated reasoning is well above what a casual EA group can sustain.
A less nice way to describe “motivated reasoning” might be “load-bearing delusions”. So it’s not that people can just correct their motivated reasoning into correct reasoning “just so”. For instance, if someone has a need to believe that their work is useful and valuable, and engages in motivated reasoning to justify this, it’s not clear to me that they would at all be able to update on “well, actually, for these very good reasons, the thing that you’ve been doing has a negative impact/is totally worthless/is highly suboptimal”, even if that update is warranted and would be valuable.
Point also applies to self.
Another dynamic which might contribute to information cascades/selection effects is people judging other people’s epistemics based on how much they agree with them on important topics. As a hypothetical example, I might judge an EA newcomer as naïve for not being sufficiently confused about important considerations, and this then leads me to perceive that all EAs who are non-naïve are confused about important considerations.
Also, ALLFED gets hit particularly hard because of their quantitative estimate, but I don’t think they’re uniquely terrible, just uniquely transparent.
Also, to be clear, the shallow evaluation estimated that their impact was lower, but still pretty large.
I am also mildly amused by the switch from “shallow” to “moderately rigorous” in your description of my review.
Oh I agree. Do you think it’s worth editing my post to make that clearer?
Yeah, I’d appreciate that.
Thanks, done.
Can you say a bit more about the first point? Are you thinking of cases of EA groups that were too disagreeable and paranoid to be sustained, or cases of the opposite sort? Or maybe cases where motivated reasoning was targeted directly?