Hi! Thanks for this post. What you are describing matches my understanding of Prosaic AGI, where no significant technical breakthrough is needed to get to safety-relevant capabilities.
There is ongoing discussion of the implications of scaling large language models, and your input would be very welcome!
On the title of your post: the term "hard left turn" is left undefined; I assume it's a reference to Soares's "sharp left turn."