I think the core argument here is that not enough strategic work, in terms of crisp arguments and numerical EV calculations, is done within cause- and intervention-level prioritization in EA. Ironically (fittingly?), I think this article could itself be substantially improved with crisp arguments and numerical EV calculations illustrating some of its subpoints.
I feel somewhat guilty of this myself, as I think I don’t use modeling or EV calculations nearly as much as I endorse, personally.
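To make the "numerical EV calculations" point concrete, here is a minimal sketch of the kind of back-of-the-envelope Monte Carlo estimate I have in mind. The two interventions, their probabilities, and their payoffs are entirely made up for illustration; the point is only that writing the comparison down takes a few lines:

```python
import random

def ev(samples):
    """Mean of Monte Carlo samples as an expected-value estimate."""
    return sum(samples) / len(samples)

random.seed(0)
N = 100_000

# Hypothetical intervention A: reliable, modest impact
# (lognormal multiplier around a base of 10 value units).
a = [random.lognormvariate(0, 0.5) * 10 for _ in range(N)]

# Hypothetical intervention B: ~3% chance of a large win
# (lognormal multiplier around a base of 500 value units).
b = [(random.random() < 0.03) * random.lognormvariate(0, 1.0) * 500
     for _ in range(N)]

print(f"EV(A) ~ {ev(a):.1f} units, EV(B) ~ {ev(b):.1f} units")
```

Even a toy model like this forces the key disagreements (the 3%, the 500) into the open, where they can be argued about directly.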
Secondarily, I think this post seems to assume that coordination problems are easily solved, such that evaluation systems can be easily used and deployed (if only they were funded). While I’m bullish about evaluation systems (such as some of QURI’s work), I think you’re underestimating the general difficulty of this type of thing.
Tertiarily, and somewhat related to the first point, I think the post could be improved by showcasing some of the preferred traits or actions that you believe others in EA should emulate. Aaron calls this "'Someone should do X' syndrome"; I usually refer to it with this short story:
“It was a difficult job”, he thought to himself, “but someone had to do it.”
As he walked away, he wondered who that someone would be.
Finally, some of this post and your follow-up comments feel to me somewhat "slippery," like rhetoric optimized more to "win" than to crisply engage with and consider the truth. (This may be part of why Charles reacted so aggressively.) I think more neutral language might help you get your point across better.
I guess the author missed some of the details, and the post has a slightly vague or adversarial form, but the core point just seems really important to bring up.
Maybe the author doesn't say it explicitly, but the post seems to strongly be pointing at: "(a) EAs making career decisions should make quantitative estimates of all promising career paths. (b) EA organizations should have someone making (maybe public) quantitative estimates of all possible directions for the organization to explore or fund, including checking that the marginal dollar spent on whatever it's currently doing beats the marginal dollar spent on philanthropic advising, or creating Stanislav Petrovs, or persuading police officers to work on suicide reduction."
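A sketch of what (b) might look like in its simplest form: a point-estimate comparison of value per marginal dollar across candidate directions. All of the names except "philanthropic advising" (mentioned above), and every number, are hypothetical placeholders:

```python
# Toy marginal cost-effectiveness comparison; all figures invented.
# "value_if_success" is in arbitrary value units, "cost" in dollars.
options = {
    "current program":        {"p_success": 0.6, "value_if_success": 100, "cost": 50},
    "philanthropic advising": {"p_success": 0.2, "value_if_success": 900, "cost": 60},
    "ops capacity building":  {"p_success": 0.4, "value_if_success": 300, "cost": 40},
}

# Expected value units per marginal dollar for each option.
results = {
    name: o["p_success"] * o["value_if_success"] / o["cost"]
    for name, o in options.items()
}

for name, ev_per_dollar in sorted(results.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {ev_per_dollar:.2f} value units per marginal dollar")
```

A real version would use distributions rather than point estimates, but even this level of explicitness makes the "is the marginal dollar best spent here?" question answerable rather than rhetorical.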