I’m a bit late to the party here, but just listened to the episode. Really enjoyed it and found it thought-provoking, so thanks to Phil Trammell and the 80k podcast.
I had a couple of questions, so here’s hoping Phil is still monitoring this post, or, if not, that someone else is happy to answer them.
First, there was a section where, I think, it was suggested that x-risks can be treated as feeding straight into interest rates, such that the premium for investing still outweighs the benefits of spending now to try to reduce those x-risks. The example given was that even if x-risk runs at 1% per year, it’s still better to invest over the long term. But the issue that came to mind was that, at a constant 1% annual risk, the median remaining lifespan of humanity is only about 69 years (and the expected remaining lifespan about 100 years). The likelihood that there is anything around to spend on in the very long term, say 300 years, becomes quite low: below 5%. I haven’t been able to work out how the math on this would look (my rough attempt is sketched below), but doesn’t that reduced lifespan of humanity outweigh the upside to be earned by compounding interest rates over the very long term?
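To make the worry concrete, here’s a rough sketch in Python. All the numbers are my own illustrative assumptions (the 5% return in particular is just a placeholder, not a figure from the episode):

```python
import math

p = 0.01  # assumed annual extinction probability (the episode's 1% example)
r = 0.05  # assumed annual investment return (my own placeholder number)

# Median remaining lifespan of humanity under a constant hazard: ln(2) / p
print(f"half-life: {math.log(2) / p:.0f} years")        # ~69 years

# Expected remaining lifespan under a constant hazard: 1 / p
print(f"expected lifespan: {1 / p:.0f} years")          # 100 years

# Probability anything is still around to spend on after 300 years
print(f"P(survive 300y): {(1 - p) ** 300:.3f}")         # ~0.049, i.e. below 5%

# Survival-weighted ("expected") value of $1 invested for t years:
# (1 + r)^t * (1 - p)^t, which keeps growing whenever (1 + r)(1 - p) > 1.
for t in (100, 300):
    print(f"t={t}: expected value of $1 = {(1 + r) ** t * (1 - p) ** t:,.0f}")
```

If I’ve set this up right, the expectation keeps growing so long as the return rate comfortably exceeds the extinction rate, even though the survival probability heads towards zero. So maybe the answer to my own question is that compounding does dominate in pure expected-value terms, and my real worry is whether expected value is the right criterion when almost all of that value sits in ever-less-probable futures.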
Second, is it a problem (a reductio, I suppose) that the logic supporting investing over spending now would apply equally at any point in time? If the basic premise is that investment returns are higher than the returns on present expenditure, there would in fact never be a time when spending is the optimal strategy. So, in 200 years, when this patient investment fund has grown very large, a group of rational people sitting down and deciding what to do will again decide to just keep investing and never spend? (A toy version of what I mean is sketched below.)
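Here’s the toy version (again, entirely my own construction; r and g are made-up rates, not figures from the episode):

```python
r = 0.05  # assumed investment return
g = 0.02  # assumed growth rate in the "cost per unit of good done"

def spend_now_vs_wait(fund: float, cost: float) -> tuple[float, float]:
    """Impact of spending the whole fund today vs. investing one more year."""
    impact_now = fund / cost
    impact_later = fund * (1 + r) / (cost * (1 + g))
    return impact_now, impact_later

# The ratio later/now is (1 + r) / (1 + g) at *every* date, independent of
# how large the fund has grown, so if waiting wins today, the same
# comparison says waiting wins in 200 years as well.
now, later = spend_now_vs_wait(fund=1_000_000, cost=100.0)
print(later / now)  # ~1.029 > 1, and identical at any point in time
```

Under constant rates the one-year comparison never changes, which is exactly the never-spend conclusion I find suspicious; presumably the real answer has to involve the return falling, or the cost of doing good rising, as the fund gets large relative to the opportunities.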
Finally, there was some discussion to the effect that an advantage of investing is that our understanding of ethics is likely to keep developing, such that we will be in a better position to decide how funds can best be applied in the future. But this could equally mean that, down the track, we realise that the ethics of longtermism were flawed or much less persuasive than previously thought. If that’s the case, then a fund locked into patient philanthropy could be a big mistake, no?
Anyway, just wanted to share the thoughts I’d had in case anyone has good answers or is interested in discussing.
Thanks again for the podcast.
Cheers
Putting aside any debate over the relative values you’ve assigned here, I think you might be making an error in the way you try to translate relative moral harms into a dollar value, using the cost of extending a person’s life through donations to GiveWell’s charities.
To give an absurd example, the ‘harm’ caused if I were to punch a stranger in the face (assuming that I hurt them, but don’t otherwise cause any permanent damage) is a fraction of the harm caused if I were to take a year off that person’s life (which you have said can be valued at $100). Let’s say punching someone in the face is at most 1/10th as bad as prematurely ending their life.
However, even if I were to get more than $10 of enjoyment out of punching that person, I don’t think it follows that I’m morally permitted to do so.
One reason is that although, at the margin, the cheapest available method for extending a human life by a year costs $100, I don’t think that necessarily reflects the true value of a year of human life for these purposes. The price is likely to be a product of market inefficiencies (noting, for example, that in the developed world people regularly spend many times that amount to extend life by a year). Also, I would certainly pay more than $100 to extend my life by a year, and no doubt so would the person being punched. It just happens that GiveWell have identified some unusually efficient programs for extending human life. Those programs do not reflect the market price, at equilibrium, for a year of human life.
I’d like to put more thought into this, but I’m presently convinced you’re making a mistake with this move.
Secondly, I think it’s wrong to conclude that something is not a ‘serious’ moral wrong just because the harm caused is a fraction of the harm caused by ending a human life. Perhaps ending a human life prematurely is very high on the moral spectrum, such that something 1/100th as bad is still quite a bad thing from a moral and utilitarian perspective.
Anyway, it’s a good debate to be having, even if I don’t reach the same conclusions you do.
(P.S. First post on the EA Forum, so apologies if I’m getting any etiquette wrong or rehashing ideas that have previously been debated and resolved.)