I’m fairly sure deep learning alone will not result in AGI
How sure? :)
What about some combination of deep learning (e.g. massive self-supervised) + within-context/episodic memory/state + procedurally-generated tasks + large-scale population-based training + self-play...? I’m just naming a few contemporary ‘prosaic’ practices which, to me, seem plausibly-enough sufficient to produce AGI that it warrants attention.
Like I said, it’s based on my gut feeling, but I’m fairly sure.
Is it your experience that adding more complexity and concatenating different ML models results in better quality and generality, and if so, in what domains? I would have the opposite intuition, especially in NLP.
Also, do you happen to know why “prosaic” practices are called “prosaic”? I have never understood the connection to the dictionary definition of “prosaic”.