> 2. Furthermore, one said “whenever major technological developments happen, everyone gets a promotion”—everyone’s job will be slightly more interesting and slightly less grunt-work-y.
Interesting framing. It’s true in a way, and it also means that people need to learn more in order to contribute meaningfully. A reasonably productive worker used to be a labourer; then they needed to be literate (which takes a very long time!); then they needed to absorb more and more institutional knowledge and leverage increasingly complex tools.
> 3. However, on longer timescales, one noted that economists’ predictions on employment rates contradicted their own predictions on how much work AI would do. He said he thought economists generally have a status quo bias.
I think people in general have a status quo bias. Even people working within AI spent most of their lives in a non-AI world.
> 4. On the other hand, he thought AI researchers who predicted massive societal revolution generally had an “excitement bias.”
This is pretty much what I believe about AI predictions as well. The closer someone works with AI, the more likely they are to overestimate it. For AI Safety, I think my peers have a bias towards overestimating progress and how short timelines are. However, I’d still prefer to err on the side of overestimation when playing Russian roulette with humanity’s future.
> 7. and, maybe most interestingly and comfortingly (?), as soon as it’s normalized for AI to, e.g., write, illustrate, decide, summarize, compare, research, etc, we’ll turn around and call it bizarre that we ever did those things, and no one will feel particularly bad about it.
I feel weird seeing these opinions. I write and make art occasionally. It’s soooo tedious and time-intensive to actually produce things (editing sentences, writer’s block, staring at a drawing for 2 hours to figure out that you drew the jawline 20% too far to the left, etc.). And honestly, you have to practice a lot just to produce something of passable quality. Seeing people who don’t do [X] gatekeep [X] feels weird, because people who actually do [X] spend 80% of their time on fairly basic, low-level mental tasks.
I’m a little confused about what you’re responding to with your last point. Are you saying I’m gatekeeping some X without doing X? In any case, I agree that taking grunt work out of artistic production is likely to be good in some ways. But I also feel like the changes are going to be so deep that they’ll sort of render those questions moot. Like, if expert-level, beautiful animation is generatable at the click of a button, and you know it’s AI-generated, I think it’ll change how it feels to consume that animation, to a degree that overwhelms anything else. I think people want to feel the hand of the artist, and art creation will bend around that need. Though it’s also possible I’m over-generalizing from my own experience.