Thanks for raising this.
To be clear, I’m still a huge fan of GiveWell. GiveWell only shows up in so many examples in my post because I’m so familiar with the organization.
I mostly agree with the points Holden makes in his cluster thinking post (and his other related posts). Despite that, I still have serious reservations about some of the decision-making strategies used both at GW and in the EA community at large. It could be that Holden and I mostly agree but other people take different positions. It could also be that Holden and I agree about a lot of things at a high level but have significantly different perspectives about how those high-level agreements should actually manifest themselves in concrete decision making.
For what it’s worth, I do feel like the page you linked to from GiveWell’s website may downplay the role cost-effectiveness plays in its final recommendations (though GiveWell may have a good rebuttal).
In a response to Taymon’s comment, I left a specific example of something I’d like to see change. In general, I’d like people to be more reluctant to brute-force push their way through uncertainty by putting numbers on things. I don’t think people need to stop doing that entirely, but I think it should be done while keeping in mind something like: “I’m using lots of probabilities in a domain where I have no idea if I’m well-calibrated...I need to be extra skeptical of whatever conclusions I reach.”
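To make that worry concrete, here is a toy sketch (my own illustration with made-up numbers, not anything from GiveWell or the post): if an estimate is built by multiplying several subjective guesses together, and each guess might be off by a factor of two because the estimator isn't well-calibrated, the final figure can easily be off by a factor of thirty or more in either direction.

```python
# Toy illustration (hypothetical numbers): how miscalibration can compound
# when an estimate is the product of several subjective guesses.

point_estimates = [0.4, 0.3, 0.45, 0.1, 0.2]  # five made-up inputs
error_factor = 2.0                            # assume each guess may be off by 2x either way

best_guess = 1.0
for p in point_estimates:
    best_guess *= p

# If every input errs in the same direction, the errors multiply too.
n = len(point_estimates)
optimistic = best_guess * error_factor ** n   # 32x too high
pessimistic = best_guess / error_factor ** n  # 32x too low

print(f"point estimate:  {best_guess:.5f}")
print(f"plausible range: {pessimistic:.7f} to {optimistic:.5f}")
# The ends of the range differ by a factor of about 1000 (2**5 in each
# direction), which is why a single point estimate in a poorly-calibrated
# domain deserves extra skepticism.
```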
Fair enough. I remain in almost-total agreement, so I guess I’ll just have to try and keep an eye out for what you describe. But based on what I’ve seen within EA, which is evidently very different to what you’ve seen, I’m more worried about little-to-zero quantification than excessive quantification.
That’s interesting, and something I may not have considered enough. I think there’s a real possibility that there could be excessive quantification in some areas of the EA community but not enough of it in other areas.
For what it’s worth, I may have made this post too broad. I wanted to point out a handful of issues that I felt all kind of fell under the umbrella of “having excessive faith in systematic or mathematical thinking styles.” Maybe I should have written several posts on specific topics that get at areas of disagreement a bit more concretely. I might get around to those posts at some point in the future.
FWIW, as someone who was and is broadly sympathetic to the aims of the OP, my general impression agrees with “excessive quantification in some areas of the EA community but not enough of it in other areas.”
(I think the full picture has more nuance than I can easily convey, e.g. rather than ‘more vs. less quantification’ it often seems more important to me how quantitative estimates are being used—what role they play in the overall decision-making or discussion process.)
Can you elaborate on which areas of EA might tend towards each extreme? Specific examples (as vague as needed) would be awesome too, but I understand if you can’t give any.
Unfortunately, I find it hard to give examples that are comprehensible without context that is either confidential or would take me a lot of time to describe. Very, very roughly, I’m often not convinced by the use of quantitative models in research (e.g. the “Racing to the Precipice” paper on several teams racing to develop AGI) or for demonstrating impact (e.g. the model behind ALLFED’s impact which David Denkenberger presented in some recent EA Forum posts). OTOH, I often wish that more quantitative statements were made in organizational decisions or in direct feedback; e.g. “this was one of the two most interesting papers I read this year” is much more informative than “I enjoyed reading your paper.” Again, this is somewhat more subtle than I can easily convey: in particular, I’m definitely not saying that e.g. the ALLFED model or the “Racing to the Precipice” paper shouldn’t have been made. It’s more that I wish they had been accompanied by a more careful qualitative analysis, and had been used to find conceptual insights and test assumptions rather than as direct arguments for certain practical conclusions.
I’d also be excited to see more people in the EA movement doing the sort of work that I think would put society in a good position for handling future problems when they arrive. E.g., I think a lot of people who associate with EA might be awfully good at pushing for progress in metascience/open science or promoting a free & open internet.
A recent example of this happening might be the EA LTF Fund’s grants to various organizations trying to improve societal epistemic rationality (e.g. by supporting prediction markets).