However, non-EA persons are also potentially subject to the risk of harms generated by EA, and that status gives them at least some degree of standing.
Yeah, I agree that if we have reason to assume that there might be significant expected harms caused by EA, then EAs owe us answers. But I think it’s a leap of logic to go from “because your stated ambition is to do risk analysis for all of us” to “That means that even if I don’t want to wear your brand, I can demand that you answer the questions of [...]” – even if we add the hidden premise “this is about expected harms caused by EA.” Just because EA does “risk analysis for all sentient beings” doesn’t mean that EA puts sentient beings at risk. Having suboptimal institutions is bad, but I think it’s far-fetched to say that it would put non-EAs at risk. At least, it would take more work to spell out the argument (and might depend on specifics – perhaps the point goes through in very specific instances, but not so much if, e.g., an EA org buys a fancy house).
There are some potentially dangerous memes in the EA memesphere around optimizing for the greater good (discussed here, recently), which is the main concern I actually see and share. But if that were the only concern, it should be highlighted as such (and it would be confusing why many arguments then seem to be about seemingly unrelated things). (I think risks from act consequentialism were one point out of many in the Democratizing risk paper – I remember I criticized the paper for not mentioning any of the ways EAs themselves have engaged with this concern.)
By contrast, if the criticism of EA is more about “you fail at your aims” rather than “you pose a risk to all of us,” then my initial point still applies: EA doesn’t have to justify itself any more than any other similarly sized, similarly powerful movement/group/ideology. Of course, it seems very much worth listening if a reasonable-seeming and informed person tells you “you fail at your aims.”
I would have agreed pre-FTX. In my view, EA actors meaningfully contributed—in a causal sense—to the rise of SBF, which generated significant widespread harm. Given the size and lifespan of EA, that is enough to create a presumption of sufficient risk of future external harm to confer standing. There were just too many linkages and influences, several of them but-for causes.
EA has a considerable appetite for risk and little of what some commenters are dismissing as “bureaucracy,” which increases the odds of other harms felt externally. So the presumption is not rebutted in my book.