A mind upload of a flat-Earther might not stay a flat-Earther for long if they are given access to all the world’s sensors (including orbiting cameras) and have thousands of subjective years to think every month.
North Korean hackers have stolen billions of dollars. Imagine if there were a million times more of them. And that is mere human-level we’re talking about.
How do you get it aligned enough to not want to commit global genocide? Sounds like you’ve solved 99% of the alignment problem if you can do that!
I thought GPT-4 had learned the rules of chess? Pretty impressive for just being trained on text (shows it has emergent internal world models).
I think you can assume that the architecture is basically “foundation transformer model / LLM” at this point. As Connor Leahy says, they are basically “general cognition engines” and will scale to full AGI in a generation or two (with the addition of various plugins etc. to aid “System 2”-type thinking, which are now freely being offered by the AutoGPT enthusiasts and OpenAI). We may or may not get such warning shots (look out for Google DeepMind’s next multimodal model, I guess...)
I don’t think there’s as much of a gulf as there appears on the face of it. I think you are anthropomorphising to think that it will care about scale of harms in such a way (without being perfectly aligned). See also: Mesa-optimisation leading to value drift. The AI needs to be aligned on not committing global genocide indefinitely.
Hope so! And hope we don’t need any (more) lethal warning shots for it to happen. My worry is that we have very little time to get the regulation in place (hence working to try and speed that up).
I’m not sure, but I’m sure enough to be really concerned! What about the current architecture (“general cognition engine”) plus AutoGPT and plugins: isn’t that enough? And 100x the compute of GPT-4 would cost less than 1% of Google’s or Microsoft’s market capitalisation.
Even if you don’t think x-risk is likely, if you think a global catastrophe still is, I hope you can get behind calls for regulation.