All the near-term or current harms of AI that EAs ridicule as unimportant, like artists feeling ripped off or not wanting to lose their jobs. Job loss in general. Democratic reasons, i.e. people simply don't want their lives radically transformed, even if the people doing the transforming think that's irrational. Fear and distrust of AI corporations.
These would all be considered wrong reasons in EA, but PauseAI welcomes all.
Plus some downstream consequences of the above, like the social and political instability that seems likely with massive job loss. In past economic transformations, we've been able to find new jobs for most workers, but that seems less likely here. People who feel they have lost their work and its associated status/dignity/pride (and that it isn't coming back) could be fairly dangerous voters, and might even be in the majority. I also have concerns about fair distribution of the gains from AI, about a few private companies potentially cornering the market on one of the world's most critical resources (intelligence), and so on. I could see things going well for developing countries, or poorly, in part depending on the choices we make now.
My own take is that civil, economic, and political society has to largely have its act together to address these sorts of challenges before AI gets more disruptive. The disruptions will probably be too broad in scope and too rapid for a catch-up approach to end well—potentially even well before AGI exists. I see very little evidence that we are moving in an appropriate direction.