You’re explicitly branded as an EA organisation. When you’re communicating this answer to people, how are you going to handle the fact that different people in EA have very different views about the value of the future?
EAs seem to have different views about the value of the future in the sense that they disagree about population ethics (i.e. how to evaluate outcomes that differ in the numbers or the identities of the people involved). To my knowledge, there are no significant disagreements concerning time discounting (i.e. how much, if at all, to discount welfare on the basis of its temporal location). For example, I’m not aware of anyone who thinks that a long-lasting insecticidal net (LLIN) distributed a year from now does less good than one distributed today because the welfare of the first recipient, by virtue of being more removed from the present, matters less than the welfare of the second.
One can have a positive rate of (intergenerational) pure time preference for agent-relative reasons (see here). I’m actually less certain than you are (and than alexrjl is) that people don’t discount in this way. Indeed, I think many people discount in a similar way spatially, e.g. “I have obligations to help the homeless people in my town because they are right there”.
I think if EA wants to attract deontologists and virtue ethicists, we need to speak in their language and acknowledge arguments like this. Interestingly, the paper I linked to argues that discounting for agent-relative reasons doesn’t allow one to escape longtermism, since we can’t discount very much (I explain here). I’m not sure whether a hardcore deontologist would be convinced by that, but I think that’s the route we’d have to go down when engaging with them.
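To make the “we can’t discount very much” point concrete, here is a minimal numerical sketch of pure time preference. It is my own illustration under assumed rates and horizons, not the model from the linked paper: a constant annual rate delta is applied exponentially, so welfare t years from now receives weight (1 + delta)^(-t).

```python
# Illustrative sketch only: constant exponential pure time preference.
# The rates and horizons below are assumptions chosen for this example,
# not figures taken from the linked paper.

def discount_weight(delta: float, years: float) -> float:
    """Weight given to one unit of welfare `years` from now at annual rate `delta`."""
    return (1.0 + delta) ** (-years)

for delta in (0.0, 0.001, 0.005):        # 0%, 0.1% and 0.5% per year
    for years in (1, 100, 1000):
        weight = discount_weight(delta, years)
        print(f"delta={delta:.3%}  t={years:>4} years  weight={weight:.4f}")
```

At 0.1% per year, welfare a century from now keeps roughly 90% of its weight and welfare a millennium from now roughly 37%, so a small rate of pure time preference still leaves a great deal of value in the far future; how small the rate has to be is exactly the question the linked paper addresses.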
Therefore I agree with alexrjl that we need to identify the crux of each disagreement to know how best to respond. Optimal responses can take various forms.
Good question!
If I were an onlooker, I might be thinking: “Hmm, it looks like these people are trying to settle difficult EA questions in favour of certain positions and are going to advertise those as the correct answers when there is still a lot of unsettled debate.”
I think a good answer to the prompt would acknowledge the debate within EA and the fact that people have different views.
I ought to clarify: for the purposes we’ll be using our FAQ for, we want to outline and defend our urgent longtermist view. That’s why in the prompt I’m looking for answers that fall on one particular side of the debate, i.e. the side that best represents the views and goals of our organisation, which are urgent longtermist. (If I weren’t running this bounty, I would just write an answer on that side myself; I’m looking to outsource that work here.)
I think this is a very different set of goals and views from those of the EA movement as a whole, and we’re not trying to represent the movement’s views here; sorry for any confusion! I should have specified more clearly what our use case for the FAQ is. For example, I think it would probably be bad as a FAQ on EA.org.
I also think that a lot of these questions will remain unsettled. Nevertheless, for this bounty I want people to be able to indicate their tentative best-guess answer to each question in a decision-relevant way, without getting caught in the failure mode of just providing a survey of different views.
I think the valuable discussion and debate over the answers to these questions should continue elsewhere :)
I have now made some small clarifications to the original post. If we decide to continue with the bounty program, I’ll try to clarify further what our aims are and why we’re doing it this way :)