The other is that the particular style in which the EA community pursues that idea (looking for interventions with robust academic evidence of efficacy, then supporting organizations that accountably deliver a large amount of intervention per marginal dollar) is novel, but mostly because the cultural background that makes it seem possible as an option at all is new.
The kinds of evidence available for some EA interventions, e.g. existential risk ones, don't seem different in kind from the evidence probably available earlier in history. Even in the best cases, EAs often have to lean on a combination of more rigorous evidence and some not-very-rigorous or unevidenced guesses about how indirect effects work out. So if the more rigorous evidence available were substantially less rigorous than it is, I would expect things to look pretty much the same, with us just having lower standards: for example, only being willing to trust certain people's reports of how interventions were going. So I'm not convinced that some recently attained level of good evidence has much to do with the overall phenomenon of EA.
My other point was that EA isn't new, but that we don't recognize earlier attempts because they weren't gathering and using evidence in a way that we would recognize.
I also think that x-risk was basically not something many people would have worried about until after WWII. Prior to WWII there was little talk of global warming, and AI, genetic engineering, and nuclear war weren't really on the table yet.
Effective altruism as a social movement emerged from the confluence of clusters of non-profit organizations based in San Francisco, New York, and Oxford.