Stylistically, some commenters don’t seem to understand how this differs from a normal cause prioritisation exercise. Put simply, there’s a difference between choosing to ignore the Drowning Child because there are even more children in the next pond over, and ignoring the drowning children entirely because they might grow up to do bad things. Most cause prioritisation is the former; this post is the latter.
As for why the latter is a problem, I agree with JWS’s observation that this type of ‘For The Greater Good’ reasoning leads to great harm when applied at scale. This is not, or rather should not be, hypothetical for EA at this point. No amount of abstract reasoning for why this approach is ‘better’ is going to outweigh what seems to me to be very clear empirical evidence to the contrary, both within EA and without.
Beyond that issue, it’s pretty easy to cast any person, grant, or policy as plausibly very harmful if you focus only on possible negative side effects, so you end up with motivated reasoning driving your conclusions about what to do.
For example, in this post Vasco recommends:
In addition, I encourage people there to take uncertainty seriously, and, before significant further investigation, only support interventions which are beneficial in the nearterm accounting for effects on farmed animals.
But why stop at farmed animals? What about wild animals, especially insects? What about the long-term future? If we take Expected Total Hedonistic Utilitarianism seriously, as Vasco does, I expect these effects to dominate the effects on farmed animals. My background understanding is that population increases lead to more land being cultivated for farming, which reduces wild animal populations, and hence wild animal suffering, quite a bit. So I could equivalently argue:
In addition, I encourage Vasco to take uncertainty seriously, and, before significant further investigation, only support interventions which are beneficial in the nearterm accounting for effects on wild animals.
These would then tend to be the opposite set of interventions to the prior set. It just goes round and round. I think there are roughly two reasonable approaches here:
1. Pick something that seems like a clear good - ‘save lives’, ‘end factory farming’, ‘save the world’ - and try to make it happen without tying yourself into knots about side effects.
2. Really just an extension of (1), but if you come across a side effect that worries you, add addressing it as a second terminal goal and split your resources between the two goals.
By contrast, if your genuine goal is to pick an intervention with no plausible chance of causing significant harm, and you are being honest with yourself about possible backfires, you will do nothing.