I’m not so sure about that. The link above argues longtermism may evade cluelessness (which I also discuss here) and I provide some additional thoughts on cause areas that may evade the cluelessness critique here.
I am pretty unmoved by this distinction, and based on the link above, it seems that Greaves is really just making the point that a longtermist mindset incentivizes us to find robustly good interventions, not that it actually succeeds. I think it’s pretty easy to make the cluelessness case about AI alignment as a cause area, for example. It seems quite plausible to me that a lot of so-called alignment work is actually just serving to speed up capabilities. It also seems to me that you could align an AI to human values and find that human values are quite bad. Or you could align an AI well enough to avoid extinction and find that the future is astronomically bad and extinction would have been preferable.
Just to make sure I understand your position—are you saying that the cluelessness critique is valid and that it affects all altruistic actions? So Effective Altruism and altruism generally are doomed enterprises?
I don’t buy that we are clueless about all actions. For example, I would say that something like expanding our moral circle to all sentient beings is robustly good in expectation. You can of course come up with stories about why it might be bad, but these stories won’t be as forceful as the overall argument that a world that considers the welfare of all beings (that have the capacity for welfare) important is likely better than one that doesn’t.
The point that I was initially trying to make was only that I don’t think the generalized cluelessness critique particularly favors one cause (for example, animal welfare) over another (for example, human health—or vice versa). I think you might make specific arguments about uncertainty regarding particular causes or interventions, but pointing to a general sense of uncertainty does not really move the needle toward any particular cause area.
Separate from that point, I do sort of believe in cluelessness (moral and otherwise) more generally, but honestly just try to ignore that belief for the most part.
I see where Greaves is coming from with the longtermist argument. One way to avoid the complex cluelessness she describes is to ensure that the direct/intended expected impact of your intervention is sufficiently large to swamp the (foreseeable) indirect expected impacts. Longtermist interventions target astronomical or at least very large value, so they can in theory meet this standard.
I’m not claiming all longtermist interventions avoid the cluelessness critique. I do think you need to consider interventions on a case-by-case basis. But I think there are some fairly general things we can say. For example, the issue with global health interventions is that, by saving human lives, pretty much all of them increase the consumption of animals, so you have a negative impact there that is hard to weigh against the benefits of saving a life. You don’t have this same issue with animal welfare interventions.
I think about the meat eater problem as pretty distinct from general moral cluelessness. You can estimate how much additional meat people will eat as their incomes increase or as they continue to live. You might be highly uncertain about how to weigh animals vs. humans as moral patients, but that is also something you can pretty directly debate, and you can see the implications of different weights. I think of cluelessness as applying only when there are many, many possible consequences that could be highly positive or negative, and it’s nearly impossible to discuss or quantify them because the dimensions of uncertainty are so numerous.
You’re right that the issue at its core isn’t the meat eater problem. The bigger issue is that we don’t even know if saving lives now will increase or decrease future populations (there are difficult arguments on both sides). If we don’t even know that, then we are going to be at a complete loss to try to conduct assessments on animal welfare and climate change, even though we know there are going to be important impacts here.