I agree with some of the points under point 1, though other than FTX, I don’t think the downside risk of any of those examples is very large.
Fwiw I find it pretty plausible that lots of political action and movement building for the sake of movement building has indeed had a large negative impact, such that I feel uncertain about whether I should shut it all down if I had the option to do so (if I set aside concerns like unilateralism). I also feel similarly about particular examples of AI safety research but definitely not for the field as a whole.
Agree that criticisms of AI companies can be good. I don’t really consider them EA projects, though it wasn’t clear in my post that that was what I was referring to.
Fair enough for the first two, but I was thinking of the FrontierMath thing as mostly a critique of Epoch, not of OpenAI, tbc, and that’s the sense in which it mattered—Epoch made changes, afaik OpenAI did not. Epoch is at least an EA-adjacent project.
Sign seems pretty negative to me.
I agree that if I had to guess, I’d say the sign seems negative for both of the things you mention, but I am uncertain about it, particularly because of people standing behind a version of the critique (e.g. Habryka for the Nonlinear one, Alexander Berger for the Wytham Abbey one, though certainly in the latter case it’s a very different critique than what the original post said).
I think I stand by the claim that there aren’t many criticisms that clearly mattered, but this was a positive update for me.
Fwiw, I think there are probably several other criticisms that I alone could find given some more time, let alone impactful criticisms that I never even read. I didn’t even start looking for the genre of “critique of an individual part of a GiveWell cost-effectiveness analysis, which GiveWell then fixes”; I think there’s been at least one and maybe multiple such public criticisms in the past.
I also remember there being a StrongMinds critique and a Happier Lives Institute critique that very plausibly caused changes? But I don’t know the details and didn’t follow it.