The point that I was initially trying to make was only that I don't think the generalized cluelessness critique particularly favors one cause (for example, animal welfare) over another (for example, human health), or vice versa. I think you can make specific arguments about uncertainty regarding particular causes or interventions, but pointing to a general sense of uncertainty does not really move the needle towards any particular cause area.
Separate from that point, I do sort of believe in cluelessness (moral and otherwise) more generally, but honestly just try to ignore that belief for the most part.
I see where Greaves is coming from with the longtermist argument. One way to avoid the complex cluelessness she describes is to ensure the direct/intended expected impact of your intervention is sufficiently large that it swamps the (foreseeable) indirect expected impacts. Longtermist interventions target astronomical (or at least very large) value, so they can in theory meet this standard.
I'm not claiming all longtermist interventions avoid the cluelessness critique. I do think you need to consider interventions on a case-by-case basis. But I think there are some fairly general things we can say. For example, the issue with global health interventions is that saving human lives pretty much always increases the consumption of animals, so you have a negative impact there which is hard to weigh against the benefits of saving a life. You don't have this same issue with animal welfare interventions.
I think about the meat eater problem as pretty distinct from general moral cluelessness. You can estimate how much additional meat people will eat as their incomes increase or as they continue to live. You might be highly uncertain about weighing animals vs. humans as moral patients, but that is also something you can pretty directly debate, and you can see the implications of different weights. I think of cluelessness as applying only when there are many, many possible consequences that could be highly positive or negative, and it's nearly impossible to discuss or attempt to quantify them because the dimensions of uncertainty are so numerous.
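To illustrate what I mean by "seeing the implications of different weights," here's a toy sketch with entirely made-up placeholder numbers (not real estimates of anything); the only point is that the net value of a life-saving intervention flips sign at some animal-welfare weight, and you can argue directly about where that threshold should sit:

```python
# Toy back-of-the-envelope sketch. All quantities below are hypothetical
# placeholders, chosen only to show how the conclusion depends on the
# moral weight given to animals relative to humans.

human_benefit = 40.0              # hypothetical welfare units from saving one life
extra_animal_years = 300.0        # hypothetical farmed-animal life-years added by that person's consumption
suffering_per_animal_year = 0.5   # hypothetical disvalue per farmed-animal life-year

# Sweep the moral weight on animal welfare (relative to human welfare)
# and see where the net expected value of the intervention changes sign.
for moral_weight in [0.001, 0.01, 0.1, 1.0]:
    net = human_benefit - moral_weight * extra_animal_years * suffering_per_animal_year
    print(f"moral weight {moral_weight:>5}: net value = {net:+.1f}")
```

With numbers like these the intervention looks clearly positive at low weights and clearly negative at a weight of 1, which is exactly the kind of tractable, debatable disagreement I'd distinguish from genuine cluelessness.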
You're right that the issue at its core isn't the meat eater problem. The bigger issue is that we don't even know whether saving lives now will increase or decrease future populations (there are difficult arguments on both sides). If we don't even know that, then we are going to be at a complete loss when trying to assess the downstream effects on animal welfare and climate change, even though we know those impacts will be important.