Hey Vanessa!
My main point here is that if we think increased compute and processing will be valuable to AI researchers, which MacAskill and Moorhouse argue will be the case because of the ability to analyse existing data, perform computational experiments, etc., then we should expect such improvements to be valuable to human researchers as well. Indeed, if AI becomes so valuable in its own right, I would also expect AI tools to augment human researchers' capabilities. This is one reason why I don't think it's very meaningful to assume that the number of AI-equivalent researchers will increase very rapidly while the number of human researchers only grows a few percent per year. In my view, we should be adjusting for the capabilities of human researchers as well as for AI-equivalent researchers.
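To make this concrete, here's a minimal toy sketch of what "adjusting for the capabilities of human researchers" might look like. All the parameters (headcount, growth rates, the augmentation multiplier) are invented for illustration, not estimates from anyone's model:

```python
# Toy model: effective human research capacity when AI tools augment humans.
# All numbers below are illustrative assumptions, not estimates.

human_researchers = 100_000   # initial headcount
human_growth = 0.02           # headcount grows a few percent per year
augmentation = 1.0            # AI productivity multiplier per human, year 0
augmentation_growth = 0.5     # assume tools get 50% more useful each year

for year in range(11):
    effective_output = human_researchers * augmentation
    print(f"year {year:2d}: headcount {human_researchers:>9,.0f}, "
          f"effective output {effective_output:>12,.0f} researcher-equivalents")
    human_researchers *= 1 + human_growth
    augmentation *= 1 + augmentation_growth
```

Under these made-up assumptions, headcount barely moves while effective output grows more than fifty-fold over a decade, which is the sense in which counting only human headcount understates the human side of the ledger.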
As for what the EA community should do: as I've been saying for years, I think there should be more diversity in thought, projects, orgs, etc., particularly in terms of the support and attention given by thought leaders. I find there is surprisingly little diversity of thought about AI safety in particular, and the EA community could do a lot better at fostering more diverse research and discussion on this important issue.
The main unremovable advantages of AIs over humans will probably be in the following two areas:
1. A serial speed advantage of roughly 50-1000x, with my median in the 100-500x range, and more generally the ability to run slower or faster to do proportionally less or more work, though there are tradeoffs at either extreme.
2. The ability for compute/software improvements to convert directly into more researchers with essentially zero serial time required, unlike basically all biological reproduction (about the only cases that even get close are the days-to-hours doubling times of flies and some bacteria/viruses, but these are doing much simpler jobs, and it's uncertain whether you could add more compute/learning capability without slowing their doubling time).
This is the mechanism by which you can get far more AI researchers very fast, while the number of human researchers doesn't increase proportionally.
Humans probably do benefit, assuming AI is useful enough to automate, say, AI research, but these two unremovable limitations fundamentally prevent anything like an explosion in the number of human researchers, in contrast to AI researchers.
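As a rough sketch of why these two mechanisms dominate (again, every parameter here is an invented assumption, just to show the shape of the dynamic): compare researcher-equivalents when AI copies scale with compute and each runs at a serial speedup, against a human population growing a few percent per year:

```python
# Toy comparison: AI vs. human researcher-equivalents over time.
# Parameters are assumptions for illustration only.

human_pop = 100_000          # human researchers, growing ~2%/yr
ai_pop = 1_000               # AI researcher instances at year 0
serial_speedup = 100         # assume each AI does ~100x a human's serial work
compute_doubling_yrs = 1.0   # assume effective compute (and AI copies) doubles yearly

for year in range(11):
    ai_equivalents = ai_pop * serial_speedup
    print(f"year {year:2d}: humans {human_pop:>10,.0f}, "
          f"AI researcher-equivalents {ai_equivalents:>14,.0f}")
    human_pop *= 1.02                          # slow biological/training pipeline
    ai_pop *= 2 ** (1 / compute_doubling_yrs)  # copies scale with compute, no serial delay
```

The human line creeps up a few percent a year while the AI line doubles annually from a 100x head start, which is the asymmetry the two points above are gesturing at.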