I think what you said is insightful and worth considering further. Nonetheless, I will only address a specific subpoint for now, and revisit this later.
Basically, have you considered the possibility that “some EA orgs aren’t very good” is a better explanation for the problems?
Hmm, I’m not sure what you mean, and I think it’s very likely we’re talking about different problems. But assuming we’re talking about the same ones: at a high level, any prediction problem can be decomposed into bias vs. error (aka noise, aka variance).
I perceive many of the issues I’ve mentioned to be better explained by bias than error. In particular, I just don’t think we’ll see equivalently many errors in the opposite direction. This is an empirical question, however, and I’d be excited to see more careful follow-ups to test this hypothesis.
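[As a concrete illustration of the decomposition above: a minimal sketch, using hypothetical numbers, of how systematic bias differs from noise. Errors that all point the same way show up as bias; symmetric scatter shows up as variance, and mean squared error splits cleanly into the two.]

```python
import statistics

def decompose_errors(predictions, truth):
    """Split prediction errors into bias (systematic, directional offset)
    and variance (scatter around the predictor's own average)."""
    errors = [p - truth for p in predictions]
    bias = statistics.mean(errors)           # nonzero if errors lean one way
    variance = statistics.pvariance(errors)  # direction-free noise
    mse = statistics.mean(e ** 2 for e in errors)
    return bias, variance, mse

# A predictor that consistently overshoots a true value of 10:
# every error is positive, so the problem is bias, not noise.
bias, var, mse = decompose_errors([12.1, 11.9, 12.0, 12.2], truth=10.0)
# MSE = bias^2 + variance, so one-directional errors dominate via bias.
assert abs(mse - (bias ** 2 + var)) < 1e-9
```

On this framing, “we won’t see equivalently many errors in the opposite direction” is exactly the claim that the bias term, not the variance term, dominates.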
(As a separate point, I do think some EA orgs aren’t very good, with “very good” defined as: I’d rather the $s be spent on their work than sit in Open Phil coffers, or my own bank account. I imagine many other EAs would feel similarly about my own work.)
Thank you for your thoughtful reply. I think you are generous here:
I perceive many of the issues I’ve mentioned to be better explained by bias than error. In particular, I just don’t think we’ll see equivalently many errors in the opposite direction. This is an empirical question, however, and I’d be excited to see more careful follow-ups to test this hypothesis.
I think you are pointing out that, when I said I have many biases and these are inevitable, I was confusing bias with error.
What you are pointing out seems right to me.
Now, at the very least, this undermines my comment (and at worst suggests I am promoting, or suffering from, some other form of arrogance). I’m less confident in my comment now. I will reread your post and think about it a lot more.
Hi. I’m glad you appear to have gained a lot from my quick reply, but for what it’s worth I did not intend my reply as an admonishment.
I think the core of what I read as your comment is probably still valid: namely, that if I misidentified problems as biases when almost all of the failures are due to either a) noise/error or b) incompetence unrelated to decision quality (e.g. mental health, insufficient technical skills, not being hardworking enough), then the bias identification isn’t true or useful. Likewise, debiasing is somewhere between neutral and worse than useless if the problem was never bias to begin with.