This is a linkpost for my own article for Wired, which I’ve been encouraged to post here. It’s mostly about AI becoming capable of the kinds of work white-collar workers derive identity, self-worth, and status from, and draws on what’s happened with professional Go players who’ve been beaten by AI models.
For this piece I spoke with Michael Webb, whose study of AI patents in 2019 presaged a lot of what we’re seeing in the present moment, and Gregory Clark, a professor emeritus of economics at UC Davis. Also, I didn’t end up mentioning them by name, but I spoke with Philip Trammell and Robin Hanson, who were both very helpful.
I actually had a contact at DeepMind who almost connected me with Fan Hui too, but that didn’t end up working out.
Anyway, I posted an outtake reel of a few points to Twitter, which maybe I’ll just include below (partly so I can correct typos):
A few nuggets that didn’t make it into my Wired piece about AI becoming capable of white-collar work, in no particular order:
1. All 4 economists I talked to thought employment rates wouldn’t go down for the foreseeable future. Very strong “don’t worry” vibes
2. Furthermore, one said “whenever major technological developments happen, everyone gets a promotion”—everyone’s job will be slightly more interesting and slightly less grunt-work-y.
3. However, on longer timescales, one noted that economists’ predictions on employment rates contradicted their own predictions on how much work AI would do. He said he thought economists generally have a status quo bias.
4. On the other hand, he thought AI researchers who predicted massive societal revolution generally had an “excitement bias.”
This matches my observations.
5. Based on a @vgr book review, I waded through the incredibly dense “A Tenth of a Second” by Canales, and learned how fluid our expectations about human capabilities are. There was immense resistance in the 19th century to letting technology replace the human eye for scientific observation.
6. Now it seems unthinkable that we’d choose to rely on the naked human eye for, e.g., measuring distances and speeds of planets, when we could use a highly calibrated tool.
This makes me expect that a lot of other human abilities will get offloaded to AI very soon,
7. and, maybe most interestingly and comfortingly (?), as soon as it’s normalized for AI to, e.g., write, illustrate, decide, summarize, compare, research, etc., we’ll turn around and call it bizarre that we ever did those things, and no one will feel particularly bad about it.
8. Informally, a lot of people, including myself, expect to see a wave of ‘lo-fi’/handcrafted art, where the *point* is the absence of tech. ‘Artisanal’ literature, music, movies, etc.
9. Writing already looks different to me, and I expect we’ll soon think very differently about a lot of cultural things that have previously been imperceptibly interwoven with their human origin, in the same way we learn how brains work when parts of them get damaged.
If ChatGPT can do my job, then mission accomplished! I can retire happy!
> 2. Furthermore, one said “whenever major technological developments happen, everyone gets a promotion”—everyone’s job will be slightly more interesting and slightly less grunt-work-y.
Interesting framing. It’s true in a way, and it also means that people need to learn more in order to contribute meaningfully. A reasonably productive worker used to be a labourer; then they needed to be literate (which takes a very long time!); then they needed to absorb more and more institutional knowledge and leverage increasingly complex tools.
> 3. However, on longer timescales, one noted that economists’ predictions on employment rates contradicted their own predictions on how much work AI would do. He said he thought economists generally have a status quo bias.
I think people in general have a status quo bias. Even people working within AI spent most of their lives in a non-AI world.
> 4. On the other hand, he thought AI researchers who predicted massive societal revolution generally had an “excitement bias.”
This is pretty much what I believe about AI predictions as well. The closer someone works with AI, the more likely they are to overestimate it. For AI Safety, I think my peers have a bias towards overestimating progress and towards short timelines. However, I’d still prefer to err on the side of overestimation when playing Russian roulette with humanity’s future.
> 7. and, maybe most interestingly and comfortingly (?), as soon as it’s normalized for AI to, e.g., write, illustrate, decide, summarize, compare, research, etc., we’ll turn around and call it bizarre that we ever did those things, and no one will feel particularly bad about it.
I feel weird seeing these opinions. I write and make art occasionally. It’s soooo tedious and time-intensive to actually produce things (editing sentences, writer’s block, staring at a drawing for two hours to figure out that you drew the jawline 20% too far to the left, etc.). And honestly, you have to practice a lot just to produce something of passable quality. Seeing people who don’t do [X] gatekeep [X] feels weird, because people who actually do [X] spend 80% of their time on fairly basic, low-level mental tasks.
I’m a little confused about what you’re responding to with your last point. Are you saying I’m gatekeeping some X without doing X? In any case, I agree that taking grunt work out of artistic production is likely to be good in some ways. But I also feel like the changes are going to be so deep that they’ll sort of render those questions moot. Like, if expert-level beautiful animation is generatable at the click of a button, and you know it’s AI-generated, it’ll change how it feels to consume that animation, I think, to a degree that overwhelms anything else. I think people want to feel the hand of the artist, and art creation will bend around that need. Though it’s also possible I’m over-generalizing from my own experience.
I appreciate that your anecdotes, building into your last paragraph, got me thinking slightly more hopefully about AI. Thank you!
As a side note, I also appreciate your quick point… about men possibly being disproportionately affected by some incoming negatives. I’m worried about our current misunderstanding of male distress, and about how these already existing problems could be exacerbated by AI, as you mention. Thanks for bringing this up.