A better alternative is to recognize that our own future selves, and our descendants, will be able to “debug” the unpredictable consequences of the actions we take and systems we create. They can do this by creating sustainable alternatives, building resiliency, and improving their planning and evaluation. They will be motivated by self-interest to do so, and enabled by their increasing knowledge. [emphasis mine]
This point doesn’t hold in the case of animal welfare. This might seem like a minor nitpick on my part, but for EAs who prioritize animal welfare yet are also concerned about long-term effects, it’s a pretty crucial thing to note. Indeed, I’d suspect that going with what seems best right now (without more thoroughly investigating the long-term consequences we could in principle discover upon reflection) could harm the reputation of animal welfare activism, because it would seem especially reckless given that animals aren’t in a position to save themselves from the negative consequences of our choices.
An analogous point holds, more weakly, even for human-centric causes, I think. Just because future humans will be in a position to debug interventions we make in the present, that doesn’t make it prudent for us to neglect the work of considering the (often conflicting) long-term effects we could identify if we worked harder. I worry that this attitude places a burden on future people that they didn’t ask for, unless I’m misunderstanding your general claim.