Thanks for the thoughtful feedback (and for being so candid that many economics research forecasts are often ‘wild guesstimates’). Couldn’t agree more. That said, it does seem to me that with additional independent, high-quality research in areas like this, we could arrive at more accurate collective aggregate meta-forecasts.
I suspect some researchers overlook that just because something can be automated (i.e., has high potential exposure) doesn’t mean it will be. I suspect (as Bostrom points out in Deep Utopia) we’ll find that many jobs will be either legally shielded or socially insulated by human preference (e.g., masseuse, judge, child-care provider, musician, bartender). All could in principle be automated, but most people prefer that a human do them, for various reasons.
Regarding the probability of paperclips vs. economic dystopia (assuming paperclips here are a metaphor/stand-in for realistic AI threats): I don’t think anyone takes the paperclip optimizer literally; it depends entirely on timeline, which is why I repeatedly qualified that I’m referring to the next 20 years. I do think catastrophic risk increases significantly as AI increasingly permeates supply chains, the military, transportation, and other critical infrastructure.
I’d be curious to hear more about what research you’re doing now. Will reach out privately.