> Almost all EA projects have low downside risk in absolute terms
I agree with some of the points on point 1, though other than FTX, I don’t think the downside risk of any of those examples is very large. I’d walk back my claim to: the downside risk of most EA projects seems low (but there are ofc exceptions).
> There are almost no examples of criticism clearly mattering
Agree that criticisms of AI companies can be good; I don’t really consider them EA projects, but it wasn’t clear that that was what I was referring to in my post. My bad. Responding quickly to some of the other ones.
> Concerns with Intentional Insights
This seems good, though it was a long time ago.
> It’s hard to tell, but I’d guess Critiques of Prominent AI Safety Labs changed who applied to the critiqued organizations
Idk if these are “EA” projects. I think I’m much more pessimistic than you are that these posts made better things happen in the world. I’d guess that people overupdated on these somewhat. That said, I quite like these posts and the discussion in the comments.
> Gossip-based criticism of Leverage clearly mattered and imo it would have been better if it was more public
This also seems good, though it was a long time ago and I wasn’t around when Leverage was a thing.
> Sharing Information About Nonlinear clearly mattered in the sense of having some impact, though the sign is unclear
Sign seems pretty negative to me.
> Same deal for Why did CEA buy Wytham Abbey?
Sign seems pretty negative to me. Even the title is misleading, and this generated a lot of drama.
> Back in the era when EA discussions happened mainly on Facebook there were all sorts of critiques and flame wars between protest-tactics and incremental-change-tactics for animal advocacy, I don’t think this particularly changed what any given organization tried to do, but it surely changed views of individual people
Not familiar, but maybe this is useful? Idk.
Open Phil and RP both had pieces that were pretty critical of clean meat work, iirc, and those were large updates for me. I don’t think they were org-level critiques, but I could imagine a version of them being critiques of GFI.
So overall, I think I stand by the claim that there aren’t many criticisms that clearly mattered, but this was a positive update for me. Maybe I should have said that only a very small fraction of critical EA Forum posts have clear positive effects or give people useful information.
This was a great comment; thanks for writing it.
> I agree with some of the points on point 1, though other than FTX, I don’t think the downside risk of any of those examples is very large
Fwiw I find it pretty plausible that lots of political action, and movement building for the sake of movement building, have indeed had a large negative impact, such that I feel uncertain about whether I should shut it all down if I had the option to do so (setting aside concerns like unilateralism). I also feel similarly about particular examples of AI safety research, but definitely not about the field as a whole.
> Agree that criticisms of AI companies can be good, I don’t really consider them EA projects but it wasn’t clear that was what I was referring to in my post
Fair enough for the first two, but I was thinking of the FrontierMath thing as mostly a critique of Epoch, not of OpenAI, tbc, and that’s the sense in which it mattered: Epoch made changes; afaik OpenAI did not. Epoch is at least an EA-adjacent project.
> Sign seems pretty negative to me.
I agree that if I had to guess, I’d say the sign is negative for both of the things you say it’s negative for, but I’m uncertain about it, particularly because people stand behind a version of the critique (e.g. Habryka for the Nonlinear one, Alexander Berger for the Wytham Abbey one, though in the latter case it’s certainly a very different critique than what the original post said).
> I think I stand by the claim that there aren’t many criticisms that clearly mattered, but this was a positive update for me.
Fwiw, I think there are probably several other criticisms that I alone could find given some more time, let alone impactful criticisms that I never even read. I didn’t even start looking for the genre of “critique of an individual part of a GiveWell cost-effectiveness analysis, which GiveWell then fixes”; I think there’s been at least one, and maybe multiple, such public criticisms in the past.
I also remember there being a StrongMinds critique and a Happier Lives Institute critique that very plausibly caused changes? But I don’t know the details and didn’t follow it.