This seems great—I’d love to see it completed, polished a bit, and possibly published somewhere. (If you’re interested in more feedback on that process, feel free to ping me.)
Davidmanheim
I certainly agree it’s some marginal evidence of propensity, and that the outcome, not the intent, is what matters—but don’t you think that mistakes become less frequent with greater understanding and capacity?
Agreed on impacts—but I think intention matters when considering what the past implies about the future, and as I said in another reply, on that basis I will claim the Great Leap Forward isn’t a reasonable precedent for predicting future abuse or tragedy.
Thanks for writing and posting this!
I think it’s important to say this because people often over-update on the pushback to things they hear about, driven by visible second-order effects, while failing to notice that the counterfactual is the thing in question not happening at all—which far outweighs the real but typically comparatively minor problems created.
Not to answer the question, but to add a couple of links that I know you’re aware of but didn’t explicitly mention: there are two reasons that EA does better than most groups. First, EA is adjacent to and overlaps with the lesswrong-style rationality community, and the multiple years of texts on better probabilistic reasoning, and on why and how to reason more explicitly, have had a huge impact. And second, the similarly adjacent forecasting community, which was kickstarted in a real sense by people affiliated with FHI (Matheny and IARPA, Robin Hanson, and Tetlock’s later involvement).
Both of these communities have spent time thinking about better probabilistic reasoning, and have lots to say about thinking probabilistically in general instead of implicitly asserting certainty based on which side of 50% things fall. And many in EA, including myself, have long advocated that these ideas be even more centrally embraced in EA discussions. (Especially because I will claim that the concerns of the rationality community keep being relevant to EA’s failures, or prescient of later-embraced EA concerns and ideas.)
Do you have any reason to think, or evidence, that the claimed downvoting occurred?
I think (tentatively) that making (even giant and insanely consequential) mistakes with positive intentions, like the Great Leap Forward, is in a meaningful sense far less bad than mistakes more obviously aimed at cynical self-benefit at the expense of others—like, say, most of US foreign policy in South America, or post-civil-war policy related to segregation.
Better Games or Bednets?
I agree about that.
Wait, did you want them to “denounce” the choice of shutting down USAID, or the individual?
Have you read Drexler’s CAIS proposal?
I think the discussions in comments on the forum are probably a better way to get feedback and develop parts of the skill, though obviously long form writing is an additional skill.
Yeah, I agree that the title is part of your view, but I think your view is very poorly summarized by the title.
Historically, I’d disagree. And I’m not confident the change away from that is persisting.
Thank you for this post!
I broadly agree with your view, but think I strongly disagree with the conclusion. There seem to be lots of worlds where having some small percentage of total EA focus include this area pays off hugely. So while I agree it’s not the highest-impact area, because of synergies it seems somewhat likely that it’s part of the highest-impact portfolio.
This post seems broadly correct, but poorly titled, in that it concludes something very different than the title does.
Seconded the point that it’s a good discussion to have. Very closely related to my original point, I don’t think downvoting this is helpful—it’s good to have public discussion, even if I think the framing about “EA” denouncing things is confused.
I don’t think it’s “politically wise” to be associated with someone like Musk.
This grossly misconstrues what I said.
Elon has directly attacked every value I hold dear, and has directly screwed over life-saving aid to the third world. He is an enemy of effective altruist principles, and I don’t think we should be ashamed to loudly and openly say so.
I basically agree, personally, and think you missed my point.
In my personal view, there was a tremendous failure by global health security organizations to capitalize on the crisis: they were focused on stopping spread, and waited until around mid-2021 to start looking past COVID. This was largely a capacity issue, but it was also a strategic failure, and by the time anyone was seriously looking at things like the pandemic treaty, the window had closed.