So for clarity I’m much closer to your position than the ctrl.ai position, and very much agree with your concerns.
But I think that, from their perspective, the major AI labs are already defecting by scaling up models that are inherently unsafe despite knowing this has a significant chance of wiping out humanity (my understanding of ctrl.ai's view, not my own opinion[1]).
I’m going to write a response to Connor’s main post and link to it here, which might help explain where their perspective is coming from (based on my own interpretation). [Update: my comment is here, which is my attempt to communicate what the ctrl.ai position is, or at least where their scepticism of RSPs has come from.]
[1] fwiw, my own opinion is here