I don’t think that the possible outcomes of AGI/superintelligence are necessarily so binary. For example, I am concerned that AI could displace almost all human labor, making traditional capital more important as human capital becomes almost worthless. This could exacerbate wealth inequality and significantly decrease economic mobility, making post-AGI wealth mostly a function of how much wealth you had pre-AGI.
In this scenario, saving more now would enable you to have more capital while returns to capital are increasing. At the same time, there could be billions of people out of work without significant savings and in need of assistance.
I also think that even if AGI goes well for humans, that doesn’t necessarily mean it goes well for animals. Animal welfare could still be a significant cause area in a post-AGI future, and by saving more now, you would have more to donate then (potentially a lot more if returns to capital are high).