EAs seem to have different views about the value of the future in the sense that they disagree about population ethics (i.e. how to evaluate outcomes that differ in the numbers or the identities of the people involved). To my knowledge, there are no significant disagreements concerning time discounting (i.e. how much, if at all, to discount welfare on the basis of its temporal location). For example, I’m not aware of anyone who thinks that a long-lasting insecticidal net (LLIN) distributed a year from now does less good than an LLIN distributed now because the welfare of the first recipient, by virtue of being more removed from the present, matters less than the welfare of the second recipient.
One can have a positive rate of (intergenerational) pure time preference for agent-relative reasons (see here). I’m actually less certain than you are (and than alexrjl is) that people don’t discount in this way. Indeed, I think many people discount in a similar way spatially, e.g. “I have obligations to help the homeless people in my town because they are right there.”
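For concreteness, here is how a positive rate of pure time preference is usually formalised (this is just the standard discounted-utilitarian formulation, not something taken from the linked paper):

$$W = \sum_{t=0}^{T} \frac{u_t}{(1+\rho)^{t}}$$

where $u_t$ is welfare occurring $t$ periods from now and $\rho$ is the rate of pure time preference. Setting $\rho = 0$ weights welfare equally whenever it occurs; any $\rho > 0$ means the LLIN recipient a year from now counts for slightly less simply because their welfare is further from the present.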
I think that if EA wants to attract deontologists and virtue ethicists, we need to speak in their language and acknowledge arguments like this. Interestingly, the paper I linked to argues that discounting for agent-relative reasons doesn’t allow one to escape longtermism, because such reasons only justify a fairly low discount rate (I explain here). I’m not sure whether a hardcore deontologist would be convinced by that, but I think that’s the route we’d have to go down when engaging with them.
I therefore agree with alexrjl that we need to identify the crux of a disagreement in order to know how best to respond; the best response will look different depending on where that crux lies.