Is there a risk that Mustafa’s company could speed up the race towards dangerous capabilities?
Disheartening to hear a pretty weak answer to this critical question. Analysis of his answer:
First, I think the primary threat to the stability of the nation-state is not the existence of these models themselves, or indeed the existence of these models with the capabilities that I mentioned. The primary threat to the nation-state is the proliferation of power.
I’m really not sure what this means and surprised Rob didn’t follow up on this. I think he must mean that they won’t be open sourcing the weights, which is certainly good. However, it’s unclear how much this matters if the model is available to call from an API. The argument may be that other actors can’t fine-tune the model to remove guardrails, which they have put in place to make the model completely safe. I was impressed to hear his claim about jailbreaks later on:
It isn’t susceptible to any of the jailbreaks or prompt hacks, any of them. If anybody gets one, send it to me on Twitter.
Although strangely he also said:
it doesn’t generate code;
Which is trivial to disprove, so I’m not sure what he meant by that. Regardless, I think that providing API access to a model distributes a lot of the “power” of the model to everyone in the world.
I’m not in the AGI intelligence explosion camp that thinks that just by developing models with these capabilities, suddenly it gets out of the box, deceives us, persuades us to go and get access to more resources, gets to inadvertently update its own goals.
There hasn’t ever been any very solid rebuttal of the intelligence explosion argument. It mostly gets dismissed on the basis of sounding like sci-fi. You can make a good argument that dangerous capabilities will emerge before we reach this point, and that we may have a “slow take-off” in that sense. However, it seems to me that we should expect recursive self-improvement to happen eventually, because there is no fundamental reason why it isn’t possible and it would clearly be useful for achieving any task. So the question is whether it will start before or after TAI. It’s pretty clear that no one knows the answer to this question, so it’s absurd to be gambling the future of humanity on this point.
Me not participating certainly doesn’t reduce the likelihood that these models get developed.
The AI race currently consists of a small handful of companies. A CEO who was actually trying to minimize the risk of extinction would at least attempt to coordinate a deceleration among these 4 or 5 actors before dismissing this as a hopeless tragedy of the commons.
“I’m really not sure what this means and surprised Rob didn’t follow up on this.”
Just the short time constraint. Sometimes I have to just trust the audience to assess for themselves whether or not they find an answer convincing.