But investors are not perfectly patient; they discount their future welfare at some positive rate.
As Michael alluded to, I would expect that the primary explanation for positive real return rates is that people are risk averse. I don’t think this changes the conclusion much: qualitatively, the rest of the argument would still follow in this case, though the math would be different.
the indifference point of 7% returns implies that the rate at which the cost of welfare is rising (R) is only 5%.
Only for the people who are actually indifferent at a rate of 7%. I would expect that people in extreme poverty and factory-farmed animals don’t usually make this choice, so this argument says nothing about them. Similarly, most people don’t care about the far future in proportion to its size, so you can’t take their choices about it as much evidence.
Because of this, I would take the stock market + people’s risk aversion as evidence that investing to give later is probably better if you are trying to benefit only the people who invest in the stock market.
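To spell out the arithmetic behind the quoted claim, as I read the post’s setup (a minimal sketch; the 2% pure-time-preference figure is just what the post’s 7% and 5% imply):

$$r = R + \text{RPTP} \;\Rightarrow\; R = 7\% - 2\% = 5\%.$$

If risk aversion rather than pure time preference is what most of the 7% is compensating for, the decomposition would include a risk-premium term instead, which is why I’d expect the math, though not the qualitative conclusion, to change.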
I think risk-aversion and pure time preference are most likely both at play—I say a few more words about this in my response to Michael above—but yeah, fair enough.
With regard to your second point: I thought I was addressing this objection with,
If this is true, then indeed, we would do less good giving next year than giving this year.
But this one-year relationship must be temporary. Over the course of a long future, the rate of increase in the cost of producing a unit of welfare as efficiently as possible cannot, on average, exceed R. Otherwise, the most efficient way to do good would eventually be more costly than one particular way to do good: just giving money to ordinary investors for their own consumption. And since the long-run average rate of increase in the cost of welfare is bounded above by R (“5%”), investing at R + RPTP (“7%”) must eventually result in an endowment able to buy more welfare than the endowment we started with.
My point here is that, sure, maybe for farm animals, people in extreme poverty, and so on, the cost of helping them is currently growing more expensive at some rate greater than R (so, >5% per year, if R = 5%). But since the cost of helping a typical stock market investor is only growing more expensive at R (“5% per year”), eventually the curves have to cross. So over the long run, the cheapest way of “buying a unit of welfare” seems to be growing at a rate bounded above by R.
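To make the crossing concrete, here is a minimal numerical sketch (the 10% growth rate for the harder-to-reach group is purely an assumption for illustration; the 5% and 7% are the figures from the post). If the cost of helping the target group grows at 10% per year while the cost of helping a typical investor grows at R = 5%, then

$$C_{\text{target}}(t) = C_{\text{target}}(0)\,(1.10)^t, \qquad C_{\text{investor}}(t) = C_{\text{investor}}(0)\,(1.05)^t,$$

so even if the target group starts out far cheaper to help, $C_{\text{target}}(t)$ overtakes $C_{\text{investor}}(t)$ once

$$t > \frac{\ln\!\left(C_{\text{investor}}(0) / C_{\text{target}}(0)\right)}{\ln(1.10 / 1.05)}.$$

Past that point, the cheapest unit of welfare gets more expensive at only about 5% per year, while an endowment invested at 7% grows faster, so the welfare it can buy grows roughly like $(1.07/1.05)^t$, i.e., without bound.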
Does that make sense, or am I misunderstanding you?
I see, that makes more sense. Yeah, I agree that that paragraph addresses my objection; I don’t think I understood it fully the first time around.
My new epistemic status is that I don’t see any flaws in the argument, but it still seems fishy: it seems strange that an assumption as weak as the existence of even one investor means you should save.