I love your blog; it reliably provides the highest-quality EA criticism I have encountered, and I have shifted my view on a handful of issues based on it.
It may be helpful for non-philosophy readers to know that the journals these papers are published in are very impressive. For example, Ethics (which published the Mistakes in Moral Math of Longtermism paper) is the most well-regarded ethics journal I know of in our discipline, akin to what Science or Nature would be for a natural scientist.
I am somewhat disheartened that those papers did not gain visible uptake from key players in the EA space (e.g. 80K, Openphil), especially since they were published at a time when most EA organizations strike me as moving strongly towards longtermism/AI risk. My sense is that they were briefly acknowledged, then simply ignored. I don't think the same would have happened with a Science or Nature paper.
To stick with the Mistakes in Moral Math paper, for example: I think it puts forward a very strong argument against the very few explicit numerical EV models for longtermist causes. A natural longtermist response would be to adjust those models or present new ones, incorporating factors such as background risk that are currently not factored in. I have not seen any such models. Rather, longtermist pitches often get very hand-wavy when pressed for explicit EV models that compare their interventions to e.g. AMF or GiveDirectly. I take it to be a central pitch of your paper that it is very bad that we have almost no explicit numerical models, and that the ones we do have neglect crucial factors. To me, it seems that this very valid criticism went largely unheard; I have not seen new numerical EV calculations for longtermist causes since publication. This may of course be a me problem; please send me any such comparative analyses you know of!
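To make concrete the kind of explicit model I am asking for, here is a minimal toy sketch. This is my own illustrative construction, not the paper's actual model, and every number in it is a hypothetical assumption: with a constant per-century extinction risk r and value v per century survived, cutting this century's risk by a fraction f adds only about f * v in expected value, because background risk truncates the expected future.

```python
# Toy EV model (my own construction, NOT the paper's model;
# all numbers below are hypothetical assumptions).

def ev_future(r: float, v: float, horizon: int = 100_000) -> float:
    """Expected value of the future: v per century, weighted by the
    probability of surviving through that century at constant risk r."""
    return sum(v * (1 - r) ** t for t in range(1, horizon + 1))

def ev_after_intervention(r: float, v: float, f: float) -> float:
    """EV if this century's risk is cut by a fraction f (baseline r after)."""
    r1 = r * (1 - f)
    # Survive century 1 with probability (1 - r1), collect v,
    # then face the ordinary baseline future.
    return (1 - r1) * (v + ev_future(r, v))

baseline = ev_future(r=0.01, v=1.0)  # ~99 centuries' worth of value
gain = ev_after_intervention(r=0.01, v=1.0, f=0.1) - baseline
print(gain)  # ~0.1, i.e. roughly f * v: the payoff is modest, not
             # astronomical, once background risk is factored in.
```

Even a sketch this crude shows why leaving background risk out of the model changes the verdict; with r anywhere near this level, the intervention's gain is comparable to a fraction of one century's value, which is exactly the kind of comparison to AMF or GiveDirectly I would want to see made explicit.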
I don't want to end on such a gloomy note: even if I am right that these criticisms are valid and that EA fails to update on them, I am very happy that you do this work. Other critics often strike me as arguing in bad faith or being fundamentally misinformed; it is good to have a good-faith, high-quality critique to discuss with people. And in my EA-adjacent house, we often discuss your work over beers and food and greatly enjoy it, haha. Please keep it coming!