In what ways is EA very successful? Especially if you go outside the area of global health?
Hm, at a minimum: moving lots of money, making a big impact on the discussion around AI risk, and probably also making a pretty big impact on animal welfare advocacy.
My loose understanding of farmed animal advocacy is that something like half the money, and most of the leaders, are EA-aligned or EA-adjacent. And the moral value of their dollars is very high: you just see win after win every year, on a total budget across the entire field on the order of tens of millions.
I’m glad to hear that. I’ve been very happy about the successes of animal advocacy, but hadn’t imagined EA had such a counterfactual impact in it.
To be clear, from my perspective what I said is moderate but not strong evidence that EA is counterfactually responsible for those wins. I don't know enough about the details to be particularly confident.
A lot of organisations with totally awful ideas and norms have nonetheless ended up moving lots of money and persuading a lot of people. You can insert your favourite punching-bag pseudoscience movement or bad political party here. The OP is not saying that the norms of EA are worse than those organisations', just that they're not as good as they could be.
Are we at all sure that these have had, or will have, a positive impact?
We should absolutely not be sure, for example because the discussion around AI risk to date has probably accelerated rather than decelerated AI timelines. I'm most keen on seeing empirical work to figure out whether longtermist EA has been net positive so far (and a bird's-eye, outside-view analysis of whether we should expect it to be positive in the future). Most of the procedural criticisms and scandals are less important in comparison.
Relevant thoughts here include self-effacing ethical theories and Nuño’s comment here.