Looking over that comment, I realize I don’t think I’ve seen anybody else use the term “secret sauce theory”, but I like it. We should totally use that term going forward. :)
I’m not sure the secret sauce adds anything re doomerism. Many non-doomers are arguably non-doomers precisely because they think the secret sauce makes the AI humanlike enough that things will be fine by default—the AI will automatically be moral, “do the right thing”, “see reason”, or “clearly something intelligent would realise killing everyone to make paperclips is stupid”, or something like that (and I think this kind of failure to apply the Copernican Revolution to mindspace is really dangerous).
I don’t think “secret sauce” is a necessary ingredient for the “doomer” view. Indeed, Connor Leahy is so worried precisely because he thinks there is no secret sauce left (see the reference to “General Cognition Engines” here)! I’m also now in this camp: post-GPT-4, I think there is reason to freak out, because basically all that’s needed to get to AGI is more data and compute (i.e. money), and the default outcome of AGI is doom.
Fair. I suppose there are actually two paths to being a doomer (usually): secret sauce theory or extremely short timelines.