Sorry, perhaps I should have been more clear. I think there are two core issues:
Honesty:
Promoting an intervention based on analysis X (e.g. neonatal health) when the actual reason you believe in it is Y (animal welfare), and being intentionally misleading about this fact.
Flow-through:
Analyzing in depth one positive flow-through effect (animal welfare) but not other flow-through effects that plausibly are very large and negative (e.g. existential risk, wild animal welfare, growth, population ethics). (Unless there is unpublished research on this subject, hence my asking.)
I don’t have any principled objection to doing unpopular things; there are many EAs doing potentially unpopular things in an epistemically principled way without actively misleading people.
in a contrived, remote way, these could lead to people slipping in implausible, extremely negative associations
It’s not a crux for me so I don’t want to go too deeply into this point, but I don’t think it is that implausible that normal people might connect ‘we want to reduce populations in the third world’ with ‘eugenics’.
Does anyone here think that a full analysis of (checks notes) “existential risks, wild animal suffering, or long run growth, or population ethics”, especially to the degree that it would satisfy EA Forum discussion norms, is going to be a practical use of time, when they could create more charities?
Yes, if it is plausible that an analysis could suggest that, all things considered, creating a specific charity is a bad idea, then some of the analysis you do before creating that charity should be on those considerations. Or, you could skip the analysis and create some other charity that does not require such research.