That’s interesting, and something I may not have considered enough. I think there’s a real possibility that there could be excessive quantification in some areas of EA but not enough of it in other areas.
For what it’s worth, I may have made this post too broad. I wanted to point out a handful of issues that I felt all kind of fell under the umbrella of “having excessive faith in systematic or mathematical thinking styles.” Maybe I should have written several posts on specific topics that get at areas of disagreement a bit more concretely. I might get around to those posts at some point in the future.
FWIW, as someone who was and is broadly sympathetic to the aims of the OP, my general impression agrees with “excessive quantification in some areas of EA but not enough of it in other areas.”
(I think the full picture has more nuance than I can easily convey, e.g. rather than ‘more vs. less quantification’ it often seems more important to me how quantitative estimates are being used—what role they play in the overall decision-making or discussion process.)
Can you elaborate on which areas of EA might tend towards each extreme? Specific examples (as vague as needed) would be awesome too, but I understand if you can’t give any.
Unfortunately, I find it hard to give examples that are comprehensible without context that is either confidential or would take me a lot of time to describe. Very roughly, I’m often not convinced by the use of quantitative models in research (e.g. the “Racing to the Precipice” paper on several teams racing to develop AGI) or for demonstrating impact (e.g. the model behind ALLFED’s impact, which David Denkenberger presented in some recent EA Forum posts). OTOH, I often wish that more quantitative statements were made in organizational decisions or in direct feedback—e.g. “this was one of the two most interesting papers I read this year” is much more informative than “I enjoyed reading your paper.” Again, this is somewhat more subtle than I can easily convey: in particular, I’m definitely not saying that e.g. the ALLFED model or the “Racing to the Precipice” paper shouldn’t have been made—it’s more that I wish they had been accompanied by a more careful qualitative analysis, and had been used to find conceptual insights and test assumptions rather than as a direct argument for certain practical conclusions.