I agree insofar as status as an intended EA beneficiary does not presumptively give someone standing to demand answers from EA about risk management. However, non-EA persons are also potentially subject to the risk of harms generated by EA, and that status gives them at least some degree of standing.
I think the LOTR analogy is inapt. Taking Zoe’s comment here at face value, she is not suggesting that everyone put Project Mount Doom on hold until the Council of Elrond runs some public-opinion surveys. She is suggesting that reform ideas warrant further development and discussion. That’s closer to asking for a bit of a mid-level Rivendell bureaucrat’s time and a package of lembas than to diverting Frodo. Yes, it may be necessary to bring Frodo in at some point, but only if preliminary work suggests it would be worthwhile to do so.
I recognize that there could be some scenarios in which the utmost single-mindedness is essential: the Nazgûl have been sighted near the Ringbearer. But other EA decisions don’t suggest that funders and leaders are at Alert Condition Nazgûl. For example, while I don’t have a clear opinion on the Wytham purchase, it seems to have required a short-term expenditure of time and lock-up of funds for an expected medium-to-long-run payoff.
However, non-EA persons are also potentially subject to the risk of harms generated by EA, and that status gives them at least some degree of standing.
Yeah, I agree that if we have reason to assume that there might be significant expected harms caused by EA, then EAs owe us answers. But I think it’s a leap of logic to go from “because your stated ambition is to do risk analysis for all of us” to “That means that even if I don’t want to wear your brand, I can demand that you answer the questions of [...]” – even if we add the hidden premise “this is about expected harms caused by EA.” Just because EA does “risk analysis for all sentient beings” doesn’t mean that EA puts sentient beings at risk. Having suboptimal institutions is bad, but I think it’s far-fetched to say that this alone would put non-EAs at risk. At least, it would take more work to spell out the argument (and might depend on specifics – perhaps the point goes through in very specific instances, but not so much if, e.g., an EA org buys a fancy house).
There are some potentially dangerous memes in the EA memesphere around optimizing for the greater good (discussed here recently), which is the main concern I actually see and share. But if that were the only concern, it should be highlighted as such (and it would be confusing why many of the arguments then seem to be about seemingly unrelated things). (I think risks from act consequentialism were one point among many in the Democratizing Risk paper – I remember I criticized the paper for not mentioning any of the ways EAs themselves have engaged with this concern.)
By contrast, if the criticism of EA is more about “you fail at your aims” than “you pose a risk to all of us,” then my initial point still applies: EA doesn’t have to justify itself any more than any other similarly sized, similarly powerful movement/group/ideology. Of course, it seems very much worth listening if a reasonable-seeming and informed person tells you “you fail at your aims.”
I would have agreed pre-FTX. In my view, EA actors meaningfully contributed—in a causal sense—to the rise of SBF, which generated significant widespread harm. Given EA’s size and lifespan, that is enough to establish a presumption of sufficient risk of future external harm to confer standing. There were just too many linkages and influences, several of them but-for causes.
EA has a considerable appetite for risk and little of what some commenters dismiss as “bureaucracy,” which increases the odds of further harms being felt externally. So the presumption is not rebutted in my book.