Thanks James, interesting post!
A minor question: where you say the following,
MacAskill and Moorhouse argue that increases in training compute, inference compute and algorithmic efficiency have been increasing at a rate of 25 times per year, compared to the number of human researchers which increases 0.04 times per year, hence the 500x faster rate of growth. This is an inapt comparison, because in the calculation the capabilities of ‘AI researchers’ are based on their access to compute and other performance improvements, while no such adjustment is made for human researchers, who also have access to more compute and other productivity enhancements each year.
do you think human researchers’ access to compute and other productivity enhancements has a significant impact on their research capacity? It’s not obvious to me how bottlenecked human researchers are by these factors, whereas they seem much more central to the capabilities of “AI researchers”.
More generally, are there things you would like to see the EA community do differently if it placed more weight on longer AI timelines? It seems to me that even if we think short timelines are only somewhat likely, we should probably still put quite a lot of resources towards things that can have an impact in the short term.
I sometimes think about this question in relation to Australia :) I asked about it at the recent “Australia’s AI crossroads” event, i.e. whether and how Australia in particular could contribute on AI. I guess some of these points might also be relevant in NZ to some extent. Here’s what the speakers said (there’s also a recording):
- Not hosting one of the major AI labs means we can act as a trusted third party in negotiations
- Relatedly, I guess we also have historical ties and relations with both the West and China
- A history of working on catastrophic risks through the Canberra Commission (although that was 30 years ago, so it’s not obvious to me that it still means much)
- Apparently Australia is known for rolling out trustworthy regulatory software (I don’t know whether this is true). The person who said this thinks we could be world-leading in trustworthy AI if we get a good regulatory framework in place
I’m not sure I find these suggestions very compelling but thought they were interesting nonetheless!