Might (applied) economists make a resurgence in longtermist EA?
Over the past few years I’ve had the impression that, within the longtermist EA movement, there isn’t really a place for economists unless they are doing highly academic global priorities research at places like the Global Priorities Institute.
Is this set to change given recent arguments about the importance of economic growth / technological progress for reducing total existential risk? There was Leopold Aschenbrenner’s impressive paper arguing that we should accelerate growth in order to raise the amount we spend on safety and to speed through the time of perils.
Phil Trammell has also written favourably about the argument, but it still seemed fairly fringe in the movement...until now?
It appears that Will MacAskill is now taking a similar argument about the dangers of technological stagnation very seriously, as he said in his fireside chat at EA Global (around the 7-minute mark).
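To make the Aschenbrenner-style argument concrete, here is a toy numerical sketch. Everything in it is my own illustrative choice rather than the paper's actual model: the hazard specification h(t) = A * C(t)^beta / S(t)^gamma, the exponents, and the assumption that a fixed share of output goes to safety. It illustrates the core claim: if safety spending scales with output and bears more strongly on the hazard than consumption does (gamma > beta), then faster growth shrinks the total risk accumulated while passing through the time of perils.

```python
# A minimal sketch of the "speed through the time of perils" logic.
# The functional form and all parameters are illustrative assumptions,
# not numbers or equations taken from Aschenbrenner's paper.
import numpy as np

def survival_probability(g, beta=1.0, gamma=2.0, A=1e-4, safety_share=0.1,
                         horizon=500, dt=1.0):
    """Probability of surviving the horizon when the per-period hazard is
    h(t) = A * C(t)**beta / S(t)**gamma, with consumption C and safety
    spending S (a fixed share of output) both growing at rate g."""
    t = np.arange(0.0, horizon, dt)
    C = np.exp(g * t)                     # consumption grows at rate g
    S = safety_share * np.exp(g * t)      # safety spending grows at the same rate
    hazard = A * C**beta / S**gamma       # falls over time when gamma > beta
    return np.exp(-np.sum(hazard * dt))   # survive iff no catastrophe ever occurs

for g in (0.01, 0.02, 0.04):
    # roughly 0.37, 0.60, 0.77 under these made-up parameters
    print(f"growth {g:.0%}: P(survive horizon) ~ {survival_probability(g):.2f}")
```

Flipping the exponents (beta > gamma) reverses the conclusion, so faster growth raises cumulative risk; that is roughly the worry raised in the comments below. Which regime we are in is essentially what the debate turns on.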
Generally I get the impression that not much has been said about how to achieve existential security. It seems to me that boosting economic growth may be emerging as one of the most promising ways to do so. Could this mean that working as an economist on growth / innovation / technological progress, even outside of academia, becomes a very credible path, e.g. in think tanks or government? Are economists about to make a resurgence?
Any thoughts welcome!
I think in general we should consider the possibility that we could just fund most of the useful x-risk work ourselves (by expected impact), since we have so much money ($50 billion, and growing faster than the market) that we’re having a hard time spending it. Accelerating growth broadly seems to accelerate risks without counterfactually giving us much more safety work. If anything, decelerating growth seems better, so that we have more time to work on safety.
If it matters who gets a technology first, then targeted acceleration or deceleration might make sense.
(I’m saying this from a classical utilitarian perspective, which is not my view. I don’t think these conclusions follow for asymmetric views.)
My main objection to the idea that we can fund all the useful x-risk work ourselves is that what we really want to achieve is existential security, which may require global coordination. Global coordination isn’t exactly something you can easily fund.
Truth be told, though, I’m not entirely clear on the best pathways to existential security, and it’s something I’d like to see more discussion of.
Economic growth seems likely to accelerate AI development without really increasing AI safety work. This might apply to other risks too, although I think AI safety work comes almost entirely from our community, so the point applies especially strongly here.