I have read The Alignment Problem, the first few chapters of Superintelligence, and watched one or two Rob Miles videos. My question is more the second one; I agree that technically GPT-3 already has a goal / utility function (roughly, to predict the most likely next token), but it’s not an ‘interesting’ goal in that it doesn’t imply doing anything in the world.
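For concreteness, here is a minimal sketch of what that "goal" amounts to at inference time, assuming the Hugging Face transformers library and the public "gpt2" checkpoint as a stand-in for GPT-3: the model just outputs a probability distribution over next tokens, and greedy decoding takes the argmax.

```python
# Minimal sketch (assumptions: transformers + torch installed, "gpt2" used
# as a stand-in for GPT-3). The model's "goal" here is just producing a
# distribution over next tokens; greedy decoding picks the most probable one.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The goal of a language model is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# The "utility" being maximized, roughly: the probability the model assigns
# to the next token given the context. Nothing here reaches out into the world.
next_token_id = torch.argmax(logits[0, -1]).item()
print(tokenizer.decode(next_token_id))
```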