What would a plausible capabilities timeline look like, such that we could mark off progress against it?
Rather than replacing jobs in order of the IQ of humans who typically end up doing them (the naive anthropocentric view of “robots getting smarter”), what actually seems to be happening is that AI and robotics develop capabilities for only part of a job at a time, but they do that part cheaply and fast, so there’s an incentive for companies and professions to restructure to take advantage of AI. The progression of jobs eliminated is therefore going to be weird and sometimes ill-defined. So it’s probably better to try to make a timeline of capabilities, rather than a timeline of doable jobs.
Actually, this probably requires brainstorming from people more in touch with machine learning than I am. But for starters, human-level performance on all current quantifiable benchmarks (from the Allen Institute’s benchmark of primary-school test questions [easy?] to MineRL BASALT [hard?]) would be very impressive.
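To make “mark off progress against it” concrete, here is a minimal sketch of one way such a tracker could look. All benchmark names, human baselines, and AI scores below are invented placeholders, not real reported numbers:

```python
# Sketch: track which benchmarks have reached human-level performance.
# All names and figures are illustrative placeholders, not real data.

human_baselines = {
    "primary_school_qa": 0.95,  # hypothetical human score
    "minerl_basalt": 0.80,      # hypothetical human score
}

ai_scores = {
    "primary_school_qa": 0.97,  # hypothetical AI score
    "minerl_basalt": 0.35,      # hypothetical AI score
}

def milestones_reached(ai, human):
    """Return benchmarks on which AI meets or exceeds the human baseline."""
    return [name for name, baseline in human.items()
            if ai.get(name, 0.0) >= baseline]

print(milestones_reached(ai_scores, human_baselines))
# ['primary_school_qa']
```

The point of the sketch is just that “human-level on all current quantifiable benchmarks” is checkable: it reduces to an emptiness test on the set of benchmarks still below baseline.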
I think it’s useful to talk about job displacement as well, even if it’s partial rather than full. We’ve talked about job displacement due to automation (most of which is unrelated to AI) for centuries, and it seems a useful framing to me. It doesn’t assume that machines (e.g. AI) solve tasks in the same way humans would; only that they reduce the need for human labour. Though I guess it depends on what you want to do: for some purposes, it may be more useful to look at AI capabilities on more specific tasks.
That’s a good point. I’m a little worried that coarse-grained metrics like “% unemployment” or “average productivity of labor vs. capital” could fail to track AI progress if AI increases the productivity of labor rather than displacing it. But we could pick specific tasks, like making a pencil, and ask “how many hours of human labor did it take to make a pencil this year?” This might be hard for diverse task categories like writing a new piece of software, though.
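As a sketch of that task-level metric, here is a minimal example of tracking labor-hours per unit of output over time; the yearly figures are invented for illustration:

```python
# Sketch: labor-hours per unit of output as a fine-grained automation metric.
# The figures are invented placeholders, not real measurements.

labor_hours_per_pencil = {
    2021: 0.050,
    2022: 0.040,
    2023: 0.025,
}

def yearly_reduction(series):
    """Year-over-year fractional reduction in human labor per unit."""
    years = sorted(series)
    return {
        later: 1 - series[later] / series[earlier]
        for earlier, later in zip(years, years[1:])
    }

print(yearly_reduction(labor_hours_per_pencil))
# {2022: ~0.20, 2023: 0.375}
```

Unlike aggregate unemployment, this kind of per-task series would keep falling even in a world where total employment stays flat because labor is redeployed.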