I’d encourage you to stop making these sorts of posts. I think they’re off-putting for people who might otherwise engage more with more reasonable EA ideas.
I strong downvoted this comment because I think this type of discourse censorship is terrible. Effective Altruism should be about figuring out how to do the most good, and then doing just that.
“This idea is off-putting” can be used as a fully general counterargument against any new intervention or pivot. Helping farmed animals is off-putting to many. Helping people abroad before helping those at home is off-putting to many.
This is, by the way, not to say that you can’t dismiss an argument if the logic leads to absurd conclusions. Reasoning from first principles can be a dangerous activity if you take your ideas seriously (see e.g., epistemic learned helplessness, memetic immune system). But when trying to figure out how to do the most good, I think it’s really, really bad to have any sort of internal thought censor.
(I think it’s considerably better to consider “does this sound off-putting?” when deciding what actions to take.)
Expressing uncomfortable truths is important when it’s useful, but these calculations are so riddled with uncertainty and so lacking in actionable conclusions that this post and posts like it are probably net harmful.
I think it’s reasonable to say that loudly pondering uncomfortable ideas is not useful if it returns an answer with error bars so wide that you might as well have not written the post at all.