I don’t think it’s all net-negative; I think there are lots of worlds where EA has lots of good and bad that kind of wash out, or where the overall sign is pretty ambiguous in the long run.
Here are some ways I think it’s possible EA could end up causing a lot of harm. I don’t really think any of these are that likely on their own; I just think it’s generally easier to cause harm than to produce good, so there are lots of ways EA could accidentally fail to end up overall positive, and I generally think it has an uphill climb to avoid ending up as a neutral or ambiguous quirk in the ash heap of history.
The various charities don’t produce enough value to offset the harms of FTX (it seems likely to me they already have produced more, but I haven’t thought about it).
Things around accidentally accelerating AI capabilities in ways that end up being harmful.
Things around accidentally accelerating various bio capabilities in ways that end up being harmful.
Enabling some specific person to enter a position of power where they end up doing a lot of harm.
X-risk from AI is overblown, the e/accs are right about the potential of AI, and a lot of harm is caused by trying to slow down or regulate AI development.
There is an even stronger reactionary response to some future EA effort that makes things worse in some way.
Most of the risk from AI is algorithmic bias and related harms, and EA AI folks’ conflict with people working in that field ends up being harmful for reducing it.
Relying only on EV for making decisions accidentally leads to a really bad world, even when every individual decision made was positive EV (a toy sketch of how that can happen is below, after this list).
EA crowds out other, better effective giving efforts that could have arisen.
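On that EV point, here is a minimal toy sketch (the betting setup and numbers are made up purely for illustration, and Python is just a convenient way to show it): a repeated bet that is positive EV every single round, but where almost every actual trajectory still ends up near zero.

```python
import random
import statistics

# Toy bet (made-up numbers, purely illustrative): each round, wealth is
# multiplied by 2.0 with probability 0.5, otherwise by 0.4.
# Per-round expected multiplier: 0.5*2.0 + 0.5*0.4 = 1.2, so every bet is
# positive EV. But the expected log-growth, 0.5*ln(2) + 0.5*ln(0.4), is
# negative, so a typical trajectory still decays toward zero.

def simulate_trajectory(rounds, rng):
    wealth = 1.0
    for _ in range(rounds):
        wealth *= 2.0 if rng.random() < 0.5 else 0.4
    return wealth

def main(rounds=100, trials=10_000, seed=0):
    rng = random.Random(seed)
    outcomes = [simulate_trajectory(rounds, rng) for _ in range(trials)]
    print(f"per-round expected multiplier: {0.5 * 2.0 + 0.5 * 0.4:.2f}")
    print(f"median final wealth after {rounds} rounds: "
          f"{statistics.median(outcomes):.2e}")
    print(f"fraction ending below 1% of starting wealth: "
          f"{sum(w < 0.01 for w in outcomes) / trials:.1%}")

if __name__ == "__main__":
    main()
```

Whether anything EA actually does looks like repeatedly betting the whole bankroll is of course exactly what’s contested; the sketch only shows that “every decision was positive EV” doesn’t by itself rule out ending up in a really bad world.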
I note that these risks hardly apply to GHD work ;).
Can you explain how FTX harm could plausibly outweigh the good done by EA? I can’t fathom a scenario where this is the case myself.
Yeah, I think there are probably parts of EA that will look robustly good in the long run, and part of the reason I think EA as a whole is less likely to be positive (and more likely to be neutral or negative) is that actions in other areas of EA could impact those areas negatively. Though this could cut either in favor of or against GHD work. I think just having a positive impact is quite hard, even more so when doing a bunch of uncorrelated things, some of which have major downside risks.
I think it is pretty unlikely that FTX harm outweighs the good done by EA on its own. But it seems easy enough to imagine that, conditional on EA’s net benefit being barely above neutral (which seems pretty possible to me for the other reasons mentioned above, along with EA increasingly working on GCRs, which directly increases the likelihood that EA work ends up net-negative or neutral even if that shift is positive value in expectation), the scale of the stress and financial harm caused by EA via FTX outweighs that remaining benefit. And then there is the brand damage to effective giving, etc.
But yeah, I agree that my original statement above seems a lot less likely than FTX just contributing to an overall portfolio of harm from EA, or of EA work that doesn’t matter in the long run.