You probably didn’t have someone like me in mind when you wrote this, but it seems a good opportunity to write down some of my thoughts about EA.
On 1, I think despite paying lip service to moral uncertainty, EA encourages too much certainty in the normative correctness of altruism (and more specific ideas like utilitarianism), perhaps attracting people like SBF with too much philosophical certainty in general (such as about how much risk aversion is normative), or even causing such general overconfidence (by implying that philosophical questions in general aren’t that hard to answer, or by suggesting how much confidence is appropriate given a certain amount of argumentation/reflection).
I think EA also encourages too much certainty in descriptive assessment of people’s altruism, e.g., viewing a philanthropic action or commitment as directly virtuous, instead of an instance of virtue signaling (that only gives probabilistic information about someone’s true values/motivations, and that has to be interpreted through the lenses of game theory and human psychology).
On 25, I think the “safe option” is to give people information/arguments in a non-manipulative way and let them make up their own minds. If some critics are using things like social pressure or rhetoric to manipulate people into being anti-EA (as you seem to be implying—I haven’t looked into it myself), then that seems bad on their part.
On 37, where has EA messaging emphasized downside risk more? Text searches for “downside” and “risk” on https://www.effectivealtruism.org/articles/introduction-to-effective-altruism both came up empty, for example. In general it seems like there has been insufficient reflection on SBF and also on AI safety (where EA made some clear mistakes, e.g. with OpenAI, and generally contributed to the current AGI race in a potentially net negative way, but seems to have produced no public reflections on these topics).
On 39, seeing statements like this (which seems overconfident to me) makes me more worried about EA, similar to how my concern about each AI company is inversely related to how optimistic it is about AI safety.