I’m a biologist in my 30s working on cures for chronic lung diseases. I’ve followed AI developments closely over the past 3 years. Holy smokes it’s moving fast. But I have neither the technical skills nor the policy conviction to do anything about AI safety.
And I have signed the Giving What We Can pledge 🔸.
If superintelligence is coming soon and goes horribly, then I won’t be around to help anyone in 2040. If superintelligence is coming soon and goes wonderfully, then no one will need my help that badly in 2040.
Both of those extreme scenarios push me toward donating aggressively to global health in the near term, while I still can.
Does anyone else feel this way? Does anyone in a similar scenario to me see things differently?
I don’t think the possible outcomes of AGI/superintelligence are necessarily so binary. For example, I am concerned that AI could displace almost all human labor, making traditional capital more important as human capital becomes almost worthless. This could exacerbate wealth inequality and sharply reduce economic mobility, making post-AGI wealth mostly a function of how much wealth you had pre-AGI.
In this scenario, saving more now would enable you to have more capital while returns to capital are increasing. At the same time, there could be billions of people out of work without significant savings and in need of assistance.
I also think that even if AGI goes well for humans, that doesn’t necessarily mean it goes well for animals. Animal welfare could still be a significant cause area in a post-AGI future, and by saving more now, you would have more to donate then (potentially a lot more, if returns to capital are high).