Interesting post.
I have three points/thoughts in response:
1) Could it be useful to distinguish between “causal uncertainty” and “non-causal uncertainty” about who (and how many) will exist?
Causal uncertainty would be uncertainty resulting from the fact that you as a decision maker have not yet decided what to do, even though your actions will affect who will exist—a strange concept to wrap my head around. Non-causal uncertainty would be uncertainty (about who will exist) that stems from uncertainty about how other forces, largely independent of your actions, will play out.
Getting to your post, I can see why one might discount based on non-causal uncertainty (see next point for more on this), but discounting based on causal uncertainty seems rather more bizarre and almost makes my head explode (though see this paper).
2) You claim in your first sentence that discounting based on space and discounting based on time should be treated similarly, and in particular that both should be avoided. Thus it appears you claim that, absent uncertainty, we should treat the present and the future similarly [if that last part didn’t quite follow, see point 3 below]. If so, one can ask: should we also treat uncertainty about who will eventually come into existence similarly to how we treat uncertainty about who currently exists? For an example of the latter, suppose an uncertain number of people are trapped in a well: either 2 or 10 with 50-50 odds, and we can take costly actions to save them. I think we would weight the possible 10 people at only 50% (and similarly the possible 2 people), so in that sense I think we would and should discount based on uncertainty about who currently exists. If so, and if we answer yes to the question above, we should also discount future people based on non-causal uncertainty.
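To make the weighting in the well example concrete, here is a minimal sketch of the probability-weighted (expected-value) calculation it describes; the variable names and the 50-50 distribution are just the illustrative numbers from the example, not anything from your post:

```python
# Well example: either 2 or 10 people are trapped, with 50-50 odds.
# Discounting on uncertainty about who currently exists means weighting
# each possible group of people by the probability that it exists.
outcomes = {2: 0.5, 10: 0.5}  # {number of people: probability}

# Expected number of people saved if we act: 0.5*2 + 0.5*10
expected_saved = sum(n * p for n, p in outcomes.items())
print(expected_saved)  # 6.0
```

The same probability weighting, applied to people who might come into existence in the future, is what discounting on non-causal uncertainty would amount to.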
3) Another possibility is to discount not based on time per se (which you reject) but rather on current existence, so that future people are discounted or ignored until they exist, at which point they get full value. A potential difficulty with this approach: you could be sure that 1 billion people are going to be born in a barren desert next year, yet you would then have no (or only a discounted) reason to bring food to that desert until they were born, at which point you would suddenly have a humanitarian crisis on your hands that you quite foreseeably failed to prepare for. [Admittedly, people come into existence through a gradual process (e.g., 9 months), so it wouldn’t be quite a split-second change of priorities about whether to bring food, which might attenuate the force of this objection a bit.]