I don’t think that the possible outcomes of AGI/superintelligence are necessarily so binary. For example, I am concerned that AI could displace almost all human labor, making traditional capital more important as human capital becomes almost worthless. This could exacerbate wealth inequality and significantly decrease economic mobility, making post-AGI wealth mostly a function of how much wealth you had pre-AGI.
In this scenario, saving more now would enable you to have more capital while returns to capital are increasing. At the same time, there could be billions of people out of work without significant savings and in need of assistance.
I also think that even if AGI goes well for humans, that doesn’t necessarily mean it goes well for animals. Animal welfare could still be a significant cause area in a post-AGI future, and by saving more now, you would have more to donate then (potentially a lot more if returns to capital are high).
Hi Matthew,
Thank you for your comment. I think this is a reasonable criticism! There is definitely an endogenous link between investment and AI timelines that this model misses. I think that this might be hard to model in a realistic way, but I encourage people to try!
On the other hand, I think the strategic motivation is important as well. For example, here is Satya Nadella on the Dwarkesh Podcast:
In reality, both mechanisms are probably in play. My paper is intended to focus on the race mechanism.
Two more notes: first, higher savings imply lower consumption in the short term. However, even if TAI is never invented, consumption will eventually rise above its level in the stationary equilibrium purely through capital accumulation.
Lastly, the main thrust of the paper is on the implications for interest rates; I do not intend to make strong claims about social welfare.
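The first note above can be sketched in a toy Solow-style model. This is only an illustration, not the model from the paper: I'm assuming Cobb-Douglas output Y = K^alpha, constant depreciation delta, and arbitrary savings rates (both below the golden-rule rate), all made-up parameters.

```python
# Toy Solow-style sketch: raising the savings rate lowers consumption at
# first, but long-run consumption ends up above the old stationary level.
# All parameters are illustrative assumptions, not taken from the paper.
alpha, delta = 0.3, 0.1
s_old, s_new = 0.20, 0.25  # both below the golden-rule rate (= alpha)

# Start at the stationary capital stock implied by the old savings rate:
# K* solves s * K^alpha = delta * K.
K = (s_old / delta) ** (1 / (1 - alpha))
c_stationary = (1 - s_old) * K ** alpha  # consumption in the old equilibrium

path = []
for t in range(400):
    Y = K ** alpha
    path.append((1 - s_new) * Y)     # consumption under the higher savings rate
    K = (1 - delta) * K + s_new * Y  # capital accumulation

print(path[0] < c_stationary)   # short run: consumption drops
print(path[-1] > c_stationary)  # long run: consumption exceeds the old level
```

Both printed checks come out True: the immediate sacrifice in consumption is eventually more than recovered through the larger capital stock.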