Miscellaneous thoughts and questions
1.
First, I should note that it doesn’t really make sense to model the rate of change in opportunities as part of the discount rate. Future utility doesn’t become less valuable due to changes in opportunities; rather, money becomes less (or more) effective at producing utility.
I agree with the latter sentence. But isn’t basically the same thing true for the other factors you discuss (everything except pure time preference)? It seems like all of those factors are about how effectively we can turn money into utility, rather than about the value of future utility. And is that really a reason that it doesn’t make sense to include those factors in the “discount rate” (as opposed to the “pure time discounting rate”)?
As you write:
But even if we do not admit any pure time preference, we may still discount the value of future resources for four core reasons:
[...]
Or perhaps, given the text that follows the “First, I should note” passage, you really meant to be talking about something like how changes in opportunities may often be caused by donations themselves, rather than something that exogenously happens over time?
2.
Over a sufficiently long time horizon, it seems our estimate will surely converge on the true discount rate, even if we don’t invest much in figuring it out.
Could you explain why you say this? Is it a generalised notion that humanity will converge on true beliefs about all things, if given enough time? (If so, I find it hard to see why we should be confident of that, as it seems there could also be stasis or more Darwinian dynamics.) Or is there some specific reason to suspect convergence on the truth regarding discount rates in particular?
3.
Arguably, existential risk matters a lot more than value drift. Even in the absence of any philanthropic intervention, people generally try to make life better for themselves. If humanity does not go extinct, a philanthropist’s values might eventually actualize, depending on their values and on the direction humanity takes. Under most (but not all) plausible value systems and beliefs about the future direction of humanity, existential risk looks more important than value drift. The extent to which it looks more important depends on how much better one expects the future world to be (conditional on non-extinction) with philanthropic intervention than with its default trajectory.
I think these are important points. I’ve collected some relevant “crucial questions” and sources in my draft series on Crucial questions for longtermists, e.g. in relation to the question “How close to optimal would trajectories be ‘by default’ (assuming no existential catastrophe)?” It’s possible you or other readers would find that draft post, or the sources linked from it, interesting (and I’d also welcome feedback).
4.
Such events do not existentially threaten one’s financial position, so they should not be considered as part of the expropriation rate for our purposes.
Could you explain why, for our purposes, the expropriation rate should only include events that could wipe out one’s assets entirely, rather than events that result in the loss of “some but not all” of one’s assets? Is it something to do with the interest rate already being boosted upwards to account for risks of losing some but not all of one’s assets, but for some reason not being boosted upwards to account for events that wipe out one’s assets? If so, could you explain why that would be the case?
(This may be a naive question; I lack a background in econ, finance, etc. Feel free to just point me to a Wikipedia article or whatever.)
5.
Observe that even when assets are distributed across multiple funds, expropriation and value drift still reduce the expected rate of return on investments in a way that looking at historical market returns does not account for. This is a good trade—decreasing the discount rate and decreasing the investment rate by the same amount probably increases utility in most situations.
I didn’t understand these sentences. If you think you’d be able to explain them without too much effort, I’d appreciate that. (But no worries if not—my confusion may just reflect my lack of relevant background, which you’re not obliged to make up for!)
Thanks for the comments! I will respond to each of your numbered points.
The possibility of, say, extinction is a discount on utility, not on money. To see this, we can extend the formula for utility at time t. Suppose there are two possibilities for the future: extinction and non-extinction. The probability that we end up in the non-extinction world is e^{−δt}, so the expected utility due to non-extinction is e^{−δt} u(c(t)). We could also add to this the utility of the extinction world, call it u_E. Then total expected utility is (1 − e^{−δt}) u_E + e^{−δt} u(c(t)).
Then, we can set u_E = 0 to get the formula used in my essay. Or we can simply ignore the u_E term because there’s nothing we can do to change it (that assumes δ is not changeable, which is obviously not true in real life, but it’s true in the standard Ramsey model).
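To make the decomposition concrete, here is a minimal numerical sketch. The CRRA utility function, δ = 0.01, and the consumption path c(t) = 1.02^t are all illustrative assumptions of mine, not values from the essay:

```python
import math

def u(c, gamma=2.0):
    # CRRA (isoelastic) utility; gamma = 2 is an illustrative choice
    return c ** (1 - gamma) / (1 - gamma)

def expected_utility(t, delta=0.01, u_extinction=0.0, c=lambda t: 1.02 ** t):
    """Total expected utility at time t: the extinction world (probability
    1 - e^{-delta*t}) plus the non-extinction world (probability e^{-delta*t})."""
    p_survive = math.exp(-delta * t)
    return (1 - p_survive) * u_extinction + p_survive * u(c(t))

# With u_E = 0 this reduces to e^{-delta*t} * u(c(t)), the essay's formula:
t = 50
assert math.isclose(expected_utility(t), math.exp(-0.01 * t) * u(1.02 ** t))
```

Setting `u_extinction` to any other constant shifts total utility but, since nothing we do (in the model) changes δ, it drops out of all comparisons between actions.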
This wasn’t a particularly well-thought-out statement, but it was basically based on the assumption that we should converge on true beliefs over time.
Thanks for the link!
If you dig into this a little more, it becomes apparent that the Ramsey model with constant relative risk aversion doesn’t really make sense. In theory, people should accept only a zero probability of losing all their assets, because a total loss would result in negative-infinity utility. But in practice, some small probability is acceptable, and in fact unavoidable. And people don’t try to get the probability of bankruptcy as low as possible, either.
But according to the theoretical model, asset prices move according to geometric Brownian motion, which means they can never go to 0. Therefore, losing all your assets is a distinct thing from assets having a negative return, and it has to happen due to some special event that’s not part of normal asset price changes. I realize this is kind of hand-wavy, but this is a commonly used model in economics, so at least I have good company in my hand-waving.
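A quick simulation illustrates both halves of this: a geometric Brownian motion path multiplies the price by a positive factor at every step, so it stays strictly positive, and total loss has to be modeled as a separate jump event. All parameter values here are illustrative assumptions, not estimates from the essay:

```python
import math
import random

random.seed(0)

def simulate_gbm(s0=1.0, mu=0.05, sigma=0.18, years=100, steps_per_year=12):
    """Simulate geometric Brownian motion; each step multiplies the price
    by exp(normal increment), which is always positive, so the price can
    never reach 0."""
    dt = 1 / steps_per_year
    s = s0
    for _ in range(years * steps_per_year):
        z = random.gauss(0.0, 1.0)
        s *= math.exp((mu - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * z)
    return s

# Every simulated path ends strictly positive, however bad the draws:
paths = [simulate_gbm() for _ in range(1000)]
assert min(paths) > 0

# Total loss instead comes from a separate jump process, e.g. a small
# annual expropriation probability p (a made-up value):
p = 0.005
survives_century = (1 - p) ** 100  # probability of no expropriation in 100 years
```

This is why the expropriation rate has to be a separate term: nothing inside the diffusion itself ever produces a $0 outcome.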
Example: suppose δ = 2 and r = 5 (in percentage points), and we have the chance to lower δ to 1 at the cost of also lowering r to 4. We would accept that deal, because decreasing δ has a bigger effect on utility than decreasing r does. (You can construct situations where this is false, but it’s usually true.)
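One way to sanity-check this kind of trade numerically: with log utility and a consumption stream growing at the investment rate r (both simplifying assumptions of mine, not the essay’s model), discounted utility is ∫₀^∞ e^{−δt} ln(e^{rt}) dt = r/δ², which rises when δ and r fall together by the same amount whenever r > δ/2:

```python
import math

def discounted_utility(delta, r, T=2000.0, dt=0.01):
    # Numerically integrate ∫ e^{-delta*t} * ln(e^{r*t}) dt from 0 to T
    # (log utility, consumption c(t) = e^{r*t}); analytic value is r / delta**2.
    total, t = 0.0, 0.0
    while t < T:
        total += math.exp(-delta * t) * (r * t) * dt
        t += dt
    return total

v_before = discounted_utility(0.02, 0.05)  # ≈ 0.05 / 0.02**2 = 125
v_after = discounted_utility(0.01, 0.04)   # ≈ 0.04 / 0.01**2 = 400
assert v_after > v_before  # the trade increases discounted utility here
```

The r > δ/2 condition is also why the claim is only “usually true”: with a low enough return relative to the discount rate, the same trade can reduce utility.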
Thanks for this reply!
1.
The possibility of, say, extinction is a discount on utility, not on money
By that, do you mean that extinction makes future utility less valuable? Or that it means there may be less future utility (because there are no humans to experience utility), for reasons unrelated to how effectively money can create utility?
(Sorry if this is already well-explained by your equations.)
2.
it was basically on the assumption that we should converge on true beliefs over time.
I think my quick take would be that that’s a plausible assumption, and that I definitely expect convergence towards the truth on average across areas, but that there seems a non-trivial chance of indefinitely failing to land on the truth itself in a given area. If that quick take is a reasonable one, then I think this might push slightly more in favour of work to estimate the philanthropic discount rate, as it means we’d have less reason to expect humanity to work it out eventually “by default”.
4. To check I roughly understood, is the following statement approximately correct? “The chance of events that leave one with no assets at all can’t be captured in the standard theoretical model, so we have to use a separate term for it, which is the expropriation rate. Whereas the chance of events that result in the loss of some but not all of one’s assets is already captured in the standard theoretical model, so we don’t include it in the expropriation rate.”
Future utility is not less valuable, but the possibility of extinction means there is a chance that future utility will not actualize, so we should discount the future based on this chance.
That’s pretty much right. I would add that another reason complete loss of capital is “special” is that it is possible to recover from any non-complete loss via sufficiently high investment returns. But if you have $0, no matter how good a return you get, you’ll still have $0.
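A tiny arithmetic illustration of why $0 is an absorbing state while any partial loss is in principle recoverable (the return figures are made up):

```python
# After a 90% crash, a high enough compounded return recovers the loss:
after_crash = 100_000 * 0.10             # $10,000 left
recovered = after_crash * 1.07 ** 35     # ~35 years at a 7% annual return
assert recovered > 100_000

# But $0 is absorbing: no rate of return can ever grow it.
assert 0.0 * 1.07 ** 35 == 0.0
```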