I’m concerned that characterizing harms attributable to EA as roughly several orders of magnitude less bad than saving 200K lives and saving humanity from AI are good could be used to dismiss an awful lot of bad stuff, including things significantly worse than anything in the quotation from @titotal.
Even accepting the argument and applying it to Scott Alexander’s blog post, I don’t think an orders-of-magnitude defense or an internal-affairs defense is fully convincing. Two of the items were:
> Donated a few hundred kidneys.

> Sparked a renaissance in forecasting, including major roles in creating, funding, and/or staffing Metaculus, Manifold Markets, and the Forecasting Research Institute.
The first bullet involves benefits to a relatively small group of people; by utilitarian reckoning, it is several orders of magnitude less significant than the top-line accomplishments on which the post focuses. Although the number of affected people is unknown, the harm to people experiencing significant trauma due to sexual harassment, sexual assault, and cult-like experiences would not be several orders of magnitude less significant than the benefit of giving people with kidney disease more and higher-quality years of life.
The second bullet is not worth mentioning for the benefits accrued by the forecasting participants; it is only potentially worth mentioning because of its potential indirect effects on the world. If that’s fair game despite being predominantly inside baseball at this point, then two other indirect effects should be fair game too: potentially giving people who created cultish experiences and/or abused power more influence over AI safety than they would counterfactually have had, and potentially making the AI and AI safety communities even more male-centric than they would counterfactually have been, because sexual harassment and assault make women feel unsafe.