Yeah, I could definitely see it being sooner, but I didn’t find any sources that convinced me it’s more likely in the next 10 years than later – what’s driving your shorter timelines?
I have a spreadsheet of different models, the timelines each implies, and how much weight I put on each. The result is 18% by end of 2026. Then I consider various other sources of evidence and update upwards to 38% by end of 2026. I think if it doesn’t happen by 2026 or so it’ll probably take a while longer, so my median is around 2040.
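If you’re curious about the mechanics, here’s a minimal sketch of that weighted-mixture calculation; the model names, probabilities, and weights below are made-up placeholders for illustration, not my actual spreadsheet entries:

```python
# Hypothetical weighted mixture of timeline models.
# Each model outputs P(transformative AI by end of 2026);
# weights reflect how much I trust each model and sum to 1.
models = {
    "compute-anchored flat prior": (0.20, 0.60),  # (probability, weight)
    "trend extrapolation":         (0.25, 0.25),
    "expert-survey anchor":        (0.05, 0.15),
}

prior = sum(p * w for p, w in models.values())
print(f"Weighted prior: {prior:.0%}")  # -> Weighted prior: 19%
```

The evidence-based update from ~19% to ~38% is a subjective judgment call layered on top, not something the spreadsheet computes.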
The most highly weighted model in my spreadsheet takes compute to be the main driver of progress and puts a flat distribution over orders of magnitude (OOMs) of compute. Since it’s implausible that the flat distribution should extend more than 18 or so OOMs beyond where we are now, and since we are going to get 3–5 more OOMs in the next five years, that yields roughly 20% (about 4 OOMs out of 18).
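As a quick sanity check on that arithmetic, here’s the flat-prior calculation spelled out; the 18-OOM bound and the 3–5 OOM gain are the assumptions from above:

```python
# Flat prior over remaining orders of magnitude (OOMs) of compute:
# probability mass spread uniformly over the next 18 OOMs.
total_ooms = 18        # assumed upper bound on remaining OOMs
for gained in (3, 5):  # expected OOM gain over the next five years
    print(f"{gained} OOMs -> P = {gained / total_ooms:.0%}")
# 3 OOMs -> P = 17%
# 5 OOMs -> P = 28%
# Splitting the difference lands near 20%.
```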
The biggest upward update from the bits of evidence comes from the trend embodied in transformers (e.g. GPT-3), and to some extent in AlphaGo, AlphaZero, and MuZero: strip out all that human knowledge and specialized architecture, just make a fairly simple neural net and make it huge, and it does better and better the bigger you make it.
Another big update upward is… well, just read this comment. It didn’t give me a new picture of what was going on so much as confirm the picture I already had. The fact that it is so highly upvoted and so little objected to suggests the same goes for lots of people in the community. Now there’s common knowledge.
Oh, and to answer your question about why it’s more likely sooner than later: progress right now seems to be driven by compute, and in particular by buying greater and greater quantities of it. In a few years this trend must stop, because not even the US government would have enough money to keep spending an order of magnitude more each year. So if we haven’t gotten to crazy AI by 2026 or so, the current paradigm of “just add more compute” will no longer be so viable, and we’re back to waiting for new ideas to come along.
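To see why the spending trend can’t run for long, here’s a toy extrapolation; the $10M baseline and the 10x-per-year growth rate are illustrative assumptions, not sourced figures:

```python
# Toy extrapolation of largest-training-run cost, assuming a
# hypothetical $10M baseline and 10x spending growth per year.
cost = 10e6
for year in range(2021, 2027):
    print(f"{year}: ${cost:,.0f}")
    cost *= 10
# 2026: $1,000,000,000,000 -- a trillion dollars per run, which
# is why the spending trend has to break within a few years.
```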
Gwern’s comment was really helpful for seeing the different paradigms, thanks for sharing! This reasoning about increasing compute makes sense to me. I could see it pushing me slightly more towards shorter timelines, although I’d want to spend a lot longer researching this.