MacAskill and Moorhouse argue that training compute, inference compute and algorithmic efficiency have together been increasing at a rate of 25 times per year, compared to the number of human researchers, which increases by only about 0.04 times (4%) per year, hence the 500x faster rate of growth. This is an inapt comparison, because in the calculation the capabilities of ‘AI researchers’ are based on their access to compute and other performance improvements, while no such adjustment is made for human researchers, who also gain access to more compute and other productivity enhancements each year.
Thanks James, interesting post!
A minor question: regarding the passage quoted above, do you think human researchers’ access to compute and other productivity enhancements would have a significant impact on their research capacity? It’s not obvious to me how bottlenecked human researchers are by these factors, whereas they seem much more critical to “AI researchers”.
More generally, are there things you would like to see the EA community do differently if it placed more weight on longer AI timelines? It seems to me that even if we think short timelines are only somewhat likely, we should probably still put quite a lot of resources towards things that can have an impact in the short term.
Hey Vanessa!
My main point here is that if we think increased compute and processing power will be valuable to AI researchers, which MacAskill and Moorhouse argue will be the case because of the ability to analyse existing data, perform computational experiments, etc., then we should expect such improvements to be valuable to human researchers as well. Indeed, if AI becomes so valuable in its own right, I would also expect AI tools to augment human researcher capabilities. This is one reason why I don’t think it’s very meaningful to assume that the number of AI-equivalent researchers will increase very rapidly while the number of human researchers only grows a few percent per year. In my view we should be adjusting for the capabilities of human researchers as well as for AI-equivalent researchers.
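To make this point concrete, here is a minimal, purely illustrative toy model. The 25x compute growth and ~4% human headcount growth echo the figures quoted at the top, but the 3x yearly tooling boost to human researchers is an invented assumption, not a number from the post or from MacAskill and Moorhouse. It compares the AI/human ratio when per-researcher gains from compute and tooling are credited only to the AI side against the ratio when human researchers are adjusted for them too:

```python
# Purely illustrative toy model: effective research capacity = headcount x per-researcher
# productivity. The 25x/year compute growth and ~4%/year human headcount growth echo the
# quoted passage; the 3x/year tooling boost to human researchers is an invented assumption.

COMPUTE_GROWTH = 25.0           # assumed yearly growth in effective compute (AI "headcount")
HUMAN_HEADCOUNT_GROWTH = 1.04   # assumed ~4%/year growth in human researcher numbers
HUMAN_TOOL_GROWTH = 3.0         # invented yearly productivity boost to humans from better tools

def capacity(headcount: float, productivity_per_head: float) -> float:
    """Effective research capacity as headcount times per-researcher productivity."""
    return headcount * productivity_per_head

ai, humans, human_tools = 1.0, 1.0, 1.0
for year in range(1, 6):
    ai *= COMPUTE_GROWTH
    humans *= HUMAN_HEADCOUNT_GROWTH
    human_tools *= HUMAN_TOOL_GROWTH
    unadjusted = capacity(ai, 1.0) / capacity(humans, 1.0)        # tooling gains credited to AI only
    adjusted = capacity(ai, 1.0) / capacity(humans, human_tools)  # tooling gains credited to both sides
    print(f"year {year}: AI/human ratio {unadjusted:,.0f}x unadjusted, {adjusted:,.0f}x adjusted")
```

Even in this toy setup the AI side still pulls ahead, but the size of the gap depends heavily on whether human-side productivity gains are counted at all, which is the adjustment being argued for here.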
As for what the EA community should do, as I’ve been saying for years, I think there should be more diversity in thought, projects, orgs, etc., particularly in terms of the support and attention given by thought leaders. I find there is surprisingly little diversity of thought about AI safety in particular, and the EA community could do a lot better at fostering more diverse research and discussion on this important issue.
The main unremovable advantages of AIs over humans will probably be in the following 2 areas:
1. A serial speed advantage, somewhere from 50-1000x, with my median in the 100-500x range, and more generally the ability to run faster or slower to do proportionally more or less work, although there are trade-offs at either extreme.
2. The ability for compute/software improvements to convert directly into more researchers with essentially zero serial time required, unlike basically all biological reproduction (about the only cases that even come close are the doubling times of days or hours for flies and some bacteria/viruses, but these organisms are doing much simpler jobs, and it’s uncertain whether you could add more compute/learning capability without slowing down their doubling time).
This is the mechanism by which you can get far more AI researchers very fast, while the number of human researchers doesn’t increase proportionally.
Humans probably do benefit, assuming AI is useful enough to automate, say, AI research, but these two unremovable limitations fundamentally prevent anything like an explosion in human research, in contrast to AI research.
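For illustration only, here is a small sketch of the two mechanisms described above, with made-up numbers: the 100x serial speed sits within the stated median range, while the 10x yearly compute growth and 4% human headcount growth are assumptions chosen for illustration. Copies scale directly with compute with essentially no serial reproduction time, and each copy runs faster than a human, while human researcher numbers can only grow slowly:

```python
# Illustrative sketch of the two mechanisms above, with made-up numbers: the 100x serial
# speed advantage is within the stated median range; the 10x/year compute growth and
# 4%/year human headcount growth are assumptions chosen for illustration.

SERIAL_SPEED = 100.0    # assumed AI serial speed advantage over a human researcher
COMPUTE_GROWTH = 10.0   # assumed yearly growth factor in compute available for AI copies
HUMAN_GROWTH = 1.04     # assumed ~4%/year growth in human researcher headcount

ai_copies, humans = 1.0, 1.0
for year in range(1, 6):
    ai_copies *= COMPUTE_GROWTH   # more compute converts directly into more copies, no serial delay
    humans *= HUMAN_GROWTH        # humans are added via reproduction and training, over decades
    ai_researcher_years = ai_copies * SERIAL_SPEED   # copies x speed = effective researcher-years
    human_researcher_years = humans * 1.0
    print(f"year {year}: {ai_researcher_years:,.0f} AI vs {human_researcher_years:.2f} human "
          f"effective researcher-years")
```

The particular numbers don’t matter; the point is that copying and serial speed compound multiplicatively on the AI side, while the human side is limited to slow headcount growth.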