One can have a positive rate of (intergenerational) pure time preference for agent-relative reasons (see here). I’m actually less certain than you (and alexrjl) are that people don’t discount in this way. Indeed, I think many people discount in a similar way spatially, e.g. “I have obligations to help the homeless people in my town because they are right there”.
I think that if EA wants to attract deontologists and virtue ethicists, we need to speak in their language and acknowledge arguments like this one. Interestingly, the paper I linked to argues that discounting for agent-relative reasons doesn’t allow one to escape longtermism, because we can’t discount very much (I explain why here). I’m not sure a hardcore deontologist would be convinced by that, but I think it’s the route we’d have to go down when engaging with them.
Therefore I agree with alexrjl that we need to identify the crux of any disagreement to know how best to respond. The optimal response will vary depending on what that crux turns out to be.