Should recent AI progress change the plans of people working on global health who are focused on economic outcomes?
It seems like many more people are on board with the idea that transformative AI may come soon, say within the next 10 years. This pretty clearly has ramifications for people working on longtermist cause areas, but I think it should probably affect some neartermist cause prioritisation as well.
If you think that AI will go pretty well by default (which I think many neartermists do), I think you should expect extremely rapid economic growth as more and more of industry is delegated to AI systems.
I’d guess that you should be much less excited about interventions like deworming or other programs aimed at improving people’s economic position over a number of decades. Even if you think the economic boosts from deworming and AI will stack, and that returns on well-being won’t diminish sharply with wealth, I think you should be especially uncertain about your ability to predict the impact of actions in a world with crazy advanced AI (which would generally make me more pessimistic about how useful the thing I’m working on is).
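To make the diminishing-returns worry concrete, here is a minimal sketch under standard log-utility assumptions (all figures hypothetical, chosen only for illustration): a fixed absolute income boost, like the one sometimes attributed to deworming-style programs, is worth far less in welfare terms if an AI boom has already multiplied baseline incomes.

```python
import math


def log_utility_gain(baseline_income: float, boost: float) -> float:
    """Utility gain from an absolute income boost, assuming log (CRRA = 1) utility."""
    return math.log(baseline_income + boost) - math.log(baseline_income)


# Hypothetical numbers: a $100/year income boost from a program intervention.
boost = 100.0
today = log_utility_gain(1_000.0, boost)     # baseline income ~$1,000/year
post_ai = log_utility_gain(10_000.0, boost)  # incomes 10x higher after an AI boom

print(round(today / post_ai, 1))  # → 9.6: the same boost is worth ~10x more today
```

The ratio follows directly from log utility (log(1.1) vs log(1.01)); under flatter utility curves the effect is weaker, which is why the "no sharply diminishing returns" assumption matters to the argument above.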
I don’t have a great sense of what neartermists who think AI will go well should do. I’m guessing some could work on accelerating capabilities, though I think that’s pretty uncooperative. It’s plausible that saving lives now is more valuable than before if you think those people might be uploaded, but I’m not sure there’s much of a case for this being super exciting from a consequentialist worldview when you can easily duplicate people. Working on ‘normie’ AI policy seems pretty plausible, as does helping governments orient to very rapid economic growth (maybe in a similar way to how various nonprofits helped governments orient to COVID).
To significantly change strategy, I think one would need to not only believe “AI will go well” but specifically believe that AI will go well for people of low-to-middle socioeconomic status in developing countries. The economic gains from recent technological explosions (e.g., industrialization, the computing economy) have not lifted all boats equally. There’s no guarantee that gaining the technological ability to easily achieve certain humanitarian goals means that we will actually achieve them, and recent history makes me pretty skeptical that it will quickly happen this time.
I’m not an expert, but I’d be fairly surprised if the Industrial Revolution didn’t do more to lift people in LMICs out of poverty than any known global health intervention, even if you think it increased inequality. I’d be open to taking bets on concrete claims here if we can operationalise one well.
I think the Industrial Revolution and other technological explosions very likely did (or will) have an overall anti-poverty impact, but that impact happened over a considerable amount of time and was not of the magnitude one might have hoped for. In a capitalist system, people who are far removed from the technological improvements often do benefit from them without anyone directing effort at that goal. However, in part because the benefits are indirect, they are often not quick.
So the question isn’t “when will transformational AI exist” but “when will transformational AI have enough of an impact on the wellbeing of economic-development-program beneficiaries that it significantly undermines the expected benefits of those programs?” Before updating too much on the next-few-decades impact of AI on these beneficiaries, I’d want to see concrete evidence of social/legal changes that gave me greater confidence that the benefits of an AI explosion would quickly and significantly reach them. And presumably the people involved in this work modeled a fairly high rate of baseline economic growth in the countries they are working in, so massive AI-caused economic improvement for those beneficiaries (say) 30+ years from now may have relatively modest impact in their models anyway.
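The last point can be sketched with simple discounting arithmetic (the 4% discount rate, 60-year horizon, and year-30 cutoff are all hypothetical choices for illustration, not taken from any program's actual model): if transformative AI swamps program benefits only after year 30, most of the program's discounted value survives anyway.

```python
def npv(benefits, rate=0.04):
    """Net present value of a yearly benefit stream at a constant discount rate."""
    return sum(b / (1 + rate) ** t for t, b in enumerate(benefits))


horizon = 60          # years of program benefits in a no-AI baseline (hypothetical)
annual_benefit = 1.0  # constant benefit per year, in arbitrary units

baseline = npv([annual_benefit] * horizon)
# Suppose massive AI-driven growth makes program benefits negligible after year 30:
truncated = npv([annual_benefit] * 30)

print(round(truncated / baseline, 2))  # → 0.76: ~3/4 of discounted value remains
```

With steeper discounting the surviving share is even larger, which is one way of making precise why AI-driven gains arriving 30+ years out may have relatively modest impact in these models.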
Should recent AI progress change the plans of people working on global health who are focused on economic outcomes?
I think so; see here or here for a bit more discussion of this.
If you think that AI will go pretty well by default (which I think many neartermists do)
My guess/impression is that this just hasn’t been discussed by neartermists very much (which I think is one sad side-effect of bucketing all AI stuff in a “longtermist” worldview).