Looking over that comment, I realize I don’t think I’ve seen anybody else use the term “secret sauce theory”, but I like it. We should totally use that term going forward. :)
I’m not sure if the secret sauce adds anything re doomerism. Many non-doomers are arguably non-doomers because they think the secret sauce makes the AI humanlike enough that things will be fine by default—the AI will automatically be moral, “do the right thing”, “see reason”, or “clearly something intelligent would realise killing everyone to make paperclips is stupid” or something (and I think this kind of not-applying-the-Copernican-Revolution-to-mindspace is really dangerous).
Fair. I suppose there are usually two paths to being a doomer: secret sauce theory or extremely short timelines.