Thanks for the comments! I will respond to each of your numbered points.
The possibility of, say, extinction is a discount on utility, not on money. To see this, we can extend the formula for utility at time $t$. Suppose there are two possibilities for the future: extinction and non-extinction. The probability that we end up in the non-extinction world is $e^{-\delta t}$, so the expected utility due to non-extinction is $e^{-\delta t} u(c(t))$. We could also add to this the utility of the extinction world, call it $u_E$. Then total expected utility is $(1 - e^{-\delta t})\, u_E + e^{-\delta t}\, u(c(t))$.
Then, we can say $u_E = 0$ to get the formula used in my essay. Or we can just say that we should ignore the $u_E$ term because there’s nothing we can do to change it (that’s assuming $\delta$ is not changeable, which is obviously not true in real life, but it’s true in the standard Ramsey model).
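To make that last step explicit: the $u_E$ term is a constant with respect to anything we choose (given fixed $\delta$), so whether we set $u_E = 0$ or simply ignore it, maximizing

$$(1 - e^{-\delta t})\,u_E + e^{-\delta t}\,u(c(t))$$

comes down to maximizing $e^{-\delta t}\,u(c(t))$, which is the discounted-utility formula used in the essay.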
This wasn’t a particularly well-thought-out statement, but it was basically on the assumption that we should converge on true beliefs over time.
Thanks for the link!
If you dig into this a little more, it becomes apparent that the Ramsey model with constant relative risk aversion doesn’t really make sense. In theory, people should insist on a zero probability of losing all their assets, because a total loss would mean utility of negative infinity. But in practice, some small probability is acceptable, and in fact unavoidable. And people don’t try to get the probability of bankruptcy as low as possible, either.
But according to the theoretical model, asset prices move according to geometric Brownian motion, which means they can never go to 0. Therefore, losing all your assets is a distinct thing from assets having a negative return, and it has to happen due to some special event that’s not part of normal asset price changes. I realize this is kind of hand-wavy, but this is a commonly used model in economics, so at least I have good company in my hand-waviness.
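Both points can be seen in a minimal simulation. This is only a sketch: the CRRA coefficient, drift, and volatility below are illustrative numbers chosen for the example, not values from the essay.

```python
import numpy as np

# Illustrative parameters (not from the essay): gamma is the CRRA coefficient,
# mu/sigma are the drift and volatility of the geometric Brownian motion.
gamma, mu, sigma = 2.0, 0.05, 0.2
rng = np.random.default_rng(0)

def crra_utility(c, gamma):
    """CRRA utility; diverges to -infinity as consumption approaches 0 (for gamma > 1)."""
    if gamma == 1.0:
        return np.log(c)
    return (c ** (1 - gamma) - 1) / (1 - gamma)

# Point 1: utility blows up near zero consumption, so the model effectively
# forbids any positive probability of total loss.
for c in [1.0, 0.1, 0.01, 1e-6]:
    print(f"c = {c:>8}: u(c) = {crra_utility(c, gamma):.1f}")

# Point 2: a geometric Brownian motion path multiplies by strictly positive
# factors, so simulated asset prices never actually reach 0.
dt, steps = 1 / 252, 252 * 30  # daily steps over 30 years
z = rng.standard_normal(steps)
log_returns = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
prices = 100 * np.exp(np.cumsum(log_returns))
print("minimum simulated price over 30 years:", prices.min())  # always > 0
```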
Example: Suppose $\delta = 2$. Say we have the chance to change this to $\delta = 1$, at the cost of lowering $r$ by the same amount. We would accept that deal, because decreasing $\delta$ has a bigger effect on utility than decreasing $r$ does. (You can construct situations where this is false, but it’s usually true.)
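Here is a toy numerical version of that example. To make it computable I assume log utility, consumption growing at rate $r$, a 100-year horizon, and I read $\delta = 2$ as 2% per year; none of those choices come from the essay, so treat this as an illustration of the direction of the effect rather than the essay’s actual model.

```python
import numpy as np

def discounted_utility(delta, r, T=100.0, c0=1.0, steps=100_000):
    """Riemann-sum approximation of  integral_0^T e^(-delta*t) * ln(c0 * e^(r*t)) dt."""
    dt = T / steps
    t = np.arange(steps) * dt
    return np.sum(np.exp(-delta * t) * (np.log(c0) + r * t)) * dt

baseline = discounted_utility(delta=0.02, r=0.05)  # keep delta at 2%
the_deal = discounted_utility(delta=0.01, r=0.04)  # cut delta to 1%, give up 1 point of r

print(f"baseline (delta=2%, r=5%): {baseline:.1f}")
print(f"deal     (delta=1%, r=4%): {the_deal:.1f}")
print("accept the deal?", the_deal > baseline)  # True for these numbers
```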
Thanks for this reply!
1.
The possibility of, say, extinction is a discount on utility, not on money
By that, do you mean that extinction makes future utility less valuable? Or that it means there may be less future utility (because there are no humans to experience utility), for reasons unrelated to how effectively money can create utility?
(Sorry if this is already well-explained by your equations.)
2.
it was basically on the assumption that we should converge on true beliefs over time.
I think my quick take would be that that’s a plausible assumption, and that I definitely expect convergence towards the truth on average across areas, but that there seems a non-trivial chance of indefinitely failing to land on the truth itself in a given area. If that quick take is a reasonable one, then I think this might push slightly more in favour of work to estimate the philanthropic discount rate, as it means we’d have less reason to expect humanity to work it out eventually “by default”.
4. To check I roughly understood, is the following statement approximately correct? “The chance of events that leave one with no assets at all can’t be captured in the standard theoretical model, so we have to use a separate term for it, which is the expropriation rate. Whereas the chance of events that result in the loss of some but not all of one’s assets is already captured in the standard theoretical model, so we don’t include it in the expropriation rate.”
Future utility is not less valuable, but the possibility of extinction means there is a chance that future utility will not actualize, so we should discount the future based on this chance.
That’s pretty much right. I would add that another reason complete loss of capital is “special” is that it is possible to recover from any non-complete loss via sufficiently high investing returns. But if you have $0, no matter how good a return you get, you’ll still have $0.
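To make the “separate term” idea concrete, here is a small sketch under one added assumption of my own: that complete-loss events arrive at a constant rate $\lambda$ (the expropriation rate). Then the probability of not yet having lost everything by time $t$ is $e^{-\lambda t}$, and because the post-loss branch contributes nothing (you stay at \$0 whatever return you get), it multiplies expected utility exactly like the extinction term above:

$$\mathbb{E}[U(t)] = e^{-\lambda t}\, e^{-\delta t}\, u(c(t)) = e^{-(\delta + \lambda) t}\, u(c(t)).$$

So the expropriation rate just adds to the effective discount rate, while partial losses stay inside the normal asset-price process and don’t need a separate term.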