@alx, thank you for addressing this crucial topic. I totally agree that the macro risks[1] of aligned AI are often overlooked compared to the risk of misaligned AI (notwithstanding that both are crucial). My impression is that the EA community is focused significantly more on the risk of misaligned AI. Consider, however, that Metaculus estimates the odds of misaligned AI (“Paperclipalypse”) at only about half those of an “AI-Dystopia.”
The future for labor is indeed one of the bigger macro risks, but it’s also one of the more tangible ones, making it arguably less neglected in terms of research. For example, your first call to action is to prioritize “Analysis and segmentation of all top job functions within each industry and their loss exposure.” This work has largely been done already, insofar as government primary-source data exists in the US and EU. I personally led research quantifying labor exposures,[2] and I’ll readily admit this work is largely duplicative of what many others have done. I’ll also be frank that such point forecasts are inherently wild guesstimates, given huge unknowns around technological capabilities, complementary innovations, and downstream implementation.
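For concreteness, the typical methodology looks something like this: score each occupation’s tasks for automatability, time-weight them, then aggregate with employment weights. Here’s a minimal, illustrative sketch; the data, column names, and scores are hypothetical stand-ins for inputs like O*NET task data and BLS/Eurostat employment counts:

```python
import pandas as pd

# Hypothetical task-level scores; real studies join something like O*NET task
# data to BLS/Eurostat employment counts. All numbers here are made up.
tasks = pd.DataFrame({
    "occupation":  ["paralegal", "paralegal", "welder", "welder"],
    "task_share":  [0.6, 0.4, 0.7, 0.3],   # share of work time spent on each task
    "automatable": [0.9, 0.3, 0.2, 0.1],   # judged exposure score, 0-1
})
employment = pd.Series({"paralegal": 350_000, "welder": 420_000})

# Occupation-level exposure: time-weighted average of task exposure scores.
exposure = (tasks.assign(w=tasks["task_share"] * tasks["automatable"])
                 .groupby("occupation")["w"].sum())

# Economy-wide exposure: employment-weighted average across occupations.
aggregate = (exposure * employment).sum() / employment.sum()
print(exposure)
print(f"aggregate exposure: {aggregate:.2f}")
```

The fragility should be obvious: the “automatable” scores are exactly the wild guesstimates I mentioned, and everything downstream inherits that uncertainty.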
A couple suggestions for macro discussions:
When citing any analysis, time horizon is important; different horizons can lead to very different conclusions (most studies of job displacement consider a 10-year horizon).
As we consider impacts over time, we should also differentiate between transitional/frictional unemployment and more structural/permanent unemployment. It’s one thing to lose a job and potentially have to re-train to do something else; it’s another if there aren’t even jobs to get re-trained for (a distinction that probably matters for something like deaths of despair[3]). I’m far more concerned about the latter type of unemployment, and that’s where UBI would need to come in, but that’s frankly the easy problem. The far bigger challenge is how we reorganize society and maintain “social engagement” once work is no longer central.
These critiques aside, this post is great and again I totally agree with your core point.
There are other AI-related macro risks that extend beyond—and might exceed—employment. I’ll share a post soon with my thoughts on those. For now, I’ll just say: As we approach this brave new world, we should be preparing not only by trying to find all the answers, but also by building better systems to be able to find answers in the future, and to be more resilient, reactive, and coordinated as a global society. Perhaps this is what you meant by “innovate solutions and creative risk mitigations with novel technologies”? If so, then you and I are of the same mind.
[1] By “macro,” I mean everything from macroeconomic (including labor) to political, geopolitical, and societal implications.
[2] Michael Albrecht and Stephanie Aliaga, “The Transformative Power of Generative AI,” J.P. Morgan Asset Management, September 30, 2023. I co-authored this high-level “primer” covering various macro implications of AI. It covers a lot of bases, so feel free to skip around if you’re already very familiar with some of the content.
[3] I have reservations about your specific mortality rate analysis, but I’ll save that discussion for another time. I do appreciate and agree with your broader perspective.
Thanks for the thoughtful feedback (and for being so candid that so many economic research forecasts are often ‘wild guesstimates’). Couldn’t agree more. That said, it does seem to me that with additional independent, high-quality research in areas like this, we could arrive at more accurate collective aggregate meta-forecasts.
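To make the meta-forecast point concrete: one standard way to pool independent forecasts is the geometric mean of odds rather than a simple average of probabilities. A toy sketch, with made-up numbers standing in for independent forecasts of the same event:

```python
import math

# Hypothetical independent forecasts of the same event (as probabilities).
forecasts = [0.05, 0.12, 0.08, 0.20]

# Pool via the geometric mean of odds, then convert back to a probability.
odds = [p / (1 - p) for p in forecasts]
pooled_odds = math.prod(odds) ** (1 / len(odds))
pooled = pooled_odds / (1 + pooled_odds)

print(f"simple mean of probabilities: {sum(forecasts) / len(forecasts):.3f}")
print(f"geometric mean of odds:       {pooled:.3f}")
```

The pooling rule matters less than the independence and quality of the inputs, which is exactly why more independent research should sharpen the aggregate.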
I suspect some researchers completely ignore the fact that just because something can be automated (i.e., has high potential exposure) doesn’t mean it will be. I suspect (as Bostrom points out in Deep Utopia) that many jobs will be either legally shielded or socially insulated due to human preferences (e.g., a masseuse, judge, child-care provider, musician, or bartender). All are highly automatable, but most people prefer them to be done by a human for various reasons.
Regarding the probability of paperclips vs. economic dystopia (assuming paperclips here are a metaphor/stand-in for actual realistic AI threats; I don’t think anyone takes the paperclip optimizer literally), it entirely depends on the timeline. This is why I repeatedly qualified that I’m referring to the next 20 years. I do think catastrophic risk increases significantly as AI increasingly permeates supply chains, the military, transportation, and other critical infrastructure.
I’d be curious to hear more about what research you’re doing now. Will reach out privately.