I think it’s pretty clear now that the default trajectory of AI development is taking us towards pretty much exactly the sorts of agentic AGI that MIRI et al. were worried about 11 years ago. We are not heading towards a world of AI tools by default; coordination is needed to not build agents.
If in 5 more years the state-of-the-art, most-AGI-ish systems are still basically autocomplete, incapable of taking long sequences of action-input-action-input-etc. with humans out of the loop, incapable of online learning, and this has had nothing to do with humans coordinating to slow progress towards agentic AGI, I’ll count myself as having been very wrong and very surprised.