Imo, the main risk from working on AI capabilities comes from moving the frontier of research forward, which is basically what the main AI labs (or possible big new labs to come) are doing. I could easily imagine the average AI capabilities project actually slowing things down by distracting people with work that doesn’t move the frontier forward. OTOH, it could bring more people into AI generally, and so more of them would end up working near the frontier.
So I would judge this specific project on that basis; a prior based on the average project may not be very informative.
One main question: could it allow AI researchers at the big labs to do research faster? Maybe? It seems like it could help them automate part of their own work. But would they actually use it, or some other project influenced by it? I’m not well-informed enough to guess.
Or maybe, even if it doesn’t move the frontier of research forward, it will affect how likely dangerous AI is to be deployed.
This is a great point, thanks. Part of me thinks basically any work that increases AI capabilities probably accelerates AI timelines. But it seems plausible to me that advancing the frontier of research accelerates AI timelines much more than other work that merely increases AI capabilities, and that most of this frontier work is done at major AI labs.
If that’s the case, then I think you’re right that judging this specific project against a prior for the average project (as I did in the post) is not informative.
It would also mean we could tell a story in which this ML engineer hacking on their own side project, rather than going to work at one of the main AI labs, is indeed a net positive, since the side project accelerates AI timelines much less than lab work would.
Whether it makes sense to look at the situation this way, though, depends on whether that is actually the counterfactual, or whether the counterfactual is not contributing to AI capabilities, the research frontier, or AI timelines at all.