Imo, the main risk from working on AI capabilities comes from moving forward the frontier of research, and so basically from the work of the main AI labs (or possibly new big AI labs to come). I could easily imagine the average AI capabilities work actually slowing things down by distracting people with work that doesn't move the frontier forward. OTOH, it could bring more people into AI generally, and so more of them working near the frontier.
So, I would judge this specific project on that basis, and a prior for the average project may not be very informative.
One main question: could it allow AI researchers at the big labs to do research faster? Maybe? It seems like it could help them automate part of their own work. Would they actually use it, or some other project influenced by it? I'm not sufficiently well-informed to guess.
Or, maybe even if it doesn’t move forward the frontier of research, it will affect how likely dangerous AI is to be deployed.
This is a great point, thanks. Part of me thinks basically any work that increases AI capabilities probably accelerates AI timelines. But it seems plausible to me that advancing the frontier of research accelerates AI timelines much more than other work that merely increases AI capabilities, and that most of this frontier work is done at major AI labs.
If that’s the case, then I think you’re right that my using a prior for the average project to judge this specific project (as I did in the post) is not informative.
It would also mean we could tell a story in which this ML engineer hacking on their own side project, rather than going to work at one of the main AI labs, would indeed be a net positive, since the former accelerates AI timelines much less than the latter.
Whether it makes sense to look at the situation this way, though, might depend on whether that is actually the counterfactual, or whether the counterfactual is instead not increasing AI capabilities, the frontier of research, or AI timelines at all.
The ML engineer is developing an automation technology for coding and is aware of AI risks. The engineer's polite acknowledgment of the concerns is met with your long derivation of how many current and future people she will kill with this.
Automating an aspect of coding is part of a long history of using computers to help design better computers, starting with Carver Mead’s realization that you don’t need humans to cut rubylith film to form each transistor.
You haven’t shown an argument that this project will accelerate the scenario you describe. Perhaps the engineer is brushing you off because your reasoning is broad enough to apply to all improvements in computing technology. You will get more traction if you can show more specifically how this project is “bad for the world”.
Thanks for the response and for the concern. To be clear, the purpose of this post was to explore how much a typical, small AI project would affect AI timelines and AI risk in expectation. It was not intended as a response to the ML engineer, and as such I did not send it or any of its contents to him, nor comment on the quoted thread. I understand how inappropriate it would be to reply to the engineer’s polite acknowledgment of the concerns with my long analysis of how many additional people will die in expectation due to the project accelerating AI timelines.
I also refrained from linking to the quoted thread specifically because, again, this post is not a contribution to that discussion. The thread merely inspired me to take a quantitative look at what the expected impacts of a typical ML project actually are. I included the details of the project for context, in case others wanted to take them into account when forecasting the impact.
I also included Jim and Raymond's comments because this post takes their claims as givens. I understand the ML engineer may have been skeptical of those claims, and that elaborating on why the project is expected to accelerate AI timelines (and therefore increase AI risk) would be necessary to persuade them that their project is bad for the world, but again, that aim is outside the scope of this post.
I've edited the heading after "The trigger for this post" from "My response" to "My thoughts on whether small ML projects significantly affect AI timelines" to make clear that the contents are not intended as a response to the ML engineer, but rather are just my thoughts about the claim he made. I assume that heading is what led you to interpret this post as a response to the ML engineer, but if there's anything else that led you to interpret it that way, I'd appreciate you letting me know so I can improve it for others who might read it. Thanks again for reading and offering your thoughts.
I replied on LW: