Continual investment argument for why AGI will probably happen, absent major societal catastrophes, written informally, for my notes:
We’ve been working on AI since the ~1950s, in an era of history that feels normal to us but in fact develops technologies extremely fast compared to most of human existence. In 2012, the deep learning revolution in AI started with AlexNet and GPUs. Deep learning has made progress even faster than that already very fast baseline: 10 years later, we have unprecedented and unpredicted progress in large language models like GPT-3, which show surprising emergent capabilities (text generation, translation, coding, math) despite being trained just to predict the next token of a text sequence. One can imagine that if we continue to pour in resources like training data and compute (as many companies are), continue to see algorithmic improvements at the rate we’ve seen, and continue to see hardware improvements (e.g. optical computing), then maybe humanity develops something like AGI, or AI at very high levels of capability.
Even if we don’t see this progress with deep learning and need a paradigm shift, there’s still an immense amount of human investment being poured into AI: talent, money from private investors, governments, and company profits, and other resources. There’s international competition to develop AI fast, there are immense economic incentives to build AI products that make our lives ever more convenient (among other benefits), and some of the leading companies (DeepMind, OpenAI) are explicitly aiming at AGI. Given that we’ve only been working on AI since the 1950s, that the major recent progress came in the last 10 years, and that the pace of technological innovation seems very fast or even accelerating with worldwide investment, it seems likely we will arrive at advanced AI someday, and that someday could be well within our lifetimes, barring major societal disruption.