I also don’t like this post and I’ve deleted most of it. But I do feel like this is quite important and someone needs to say it.
Joseph Miller
Dear Anthropic people, please don’t release Claude
What is the purpose of publicly deploying Claude? It seems like this will only have the effect of increasing arms race dynamics. If the reason is just to fund further safety research, then I think this is worth saying explicitly.
First, predicting the values of our successors – what John Danaher (2021) calls axiological futurism – in worlds where these are meaningfully different from ours doesn’t seem intractable at all. Significant progress has already been made in this research area and there seems to be room for much more (see the next section and the Appendix).
Could you point more specifically to what progress you think has been made? As this research area seems to have only existed since 2021, we can't yet have made successful predictions about future values, so I'm curious what has been achieved.
Related: Advantages of Cutting Your Salary
In all seriousness, I think this is a good point
Is there a risk that Mustafa’s company could speed up the race towards dangerous capabilities?
Disheartening to hear a pretty weak answer to this critical question. Analysis of his answer:
First, I think the primary threat to the stability of the nation-state is not the existence of these models themselves, or indeed the existence of these models with the capabilities that I mentioned. The primary threat to the nation-state is the proliferation of power.
I’m really not sure what this means and surprised Rob didn’t follow up on this. I think he must mean that they won’t be open sourcing the weights, which is certainly good. However, it’s unclear how much this matters if the model is available to call from an API. The argument may be that other actors can’t fine-tune the model to remove guardrails, which they have put in place to make the model completely safe. I was impressed to hear his claim about jailbreaks later on:
It isn’t susceptible to any of the jailbreaks or prompt hacks, any of them. If anybody gets one, send it to me on Twitter.
Although strangely he also said:
it doesn’t generate code;
Which is trivial to disprove, so I’m not sure what he meant by that. Regardless, I think that providing API access to a model distributes a lot of the “power” of the model to everyone in the world.
I’m not in the AGI intelligence explosion camp that thinks that just by developing models with these capabilities, suddenly it gets out of the box, deceives us, persuades us to go and get access to more resources, gets to inadvertently update its own goals.
There hasn’t ever been any very solid rebuttal of the intelligence explosion argument. It mostly gets dismissed on the basis that it sounds like sci-fi. You can make a good argument that dangerous capabilities will emerge before we reach this point, and we may have a “slow take-off” in that sense. However, it seems to me that we should expect recursive self-improvement to happen eventually, because there is no fundamental reason why it isn’t possible and it would clearly be useful for achieving any task. So the question is whether it will start before or after TAI. It’s pretty clear that no one knows the answer to this question, so it’s absurd to be gambling the future of humanity on this point.
Me not participating certainly doesn’t reduce the likelihood that these models get developed.
The AI race currently consists of a small handful of companies. A CEO who was actually trying to minimize the risk of extinction would at least attempt to coordinate a deceleration among these 4 or 5 actors before dismissing this as a hopeless tragedy of the commons.
The International PauseAI Protest: Activism under uncertainty
We use evidence-based outreach to inform people of the threats that advanced AI poses to their economic livelihoods and personal safety (HOW). Our mission is to create a united front for humanity, driving national and international coordination on robust solutions to AI-driven disempowerment (WHAT).
I’m not sure if this was the aim of the mission statement, but after reading this I still do not know what StakeOut.AI does in a concrete way.
Specifically, we are looking to use cost-effective Internet messaging tools to communicate the evidence that disempowering AI poses serious dangers (to economic livelihoods and personal safety) for people of every industry, for people of every country, and for humanity as a whole.
Thanks for clarifying, I can see why you’d want to make your mission statement broad enough to encompass future activity.
What “cost-effective Internet messaging tools” do you imagine you will be using in the near future?
I mean something like “the scenario where there is no pause and also no other development that currently seems very unlikely and changes the level of risk dramatically (eg. a massive breakthrough in human brain emulation next year).”
Was there some blocker that caused this to happen now, rather than 6 months / 1 year ago?
People are clearly using agree / disagree voting wrong. What does it mean to agree-vote a question?
Anthropic Announces new S.O.T.A. Claude 3
Yes, I think this is a reasonable response. However, it seems to rest on the assumption that just trying a bit harder at safety makes a meaningful difference. If alignment is very hard, then Anthropic’s AIs are just as likely to kill everyone as other labs’. It seems very unclear whether having “safety conscious” people at the helm will make any difference to our chance of survival, especially when they are almost always forced to make the exact same decisions as people who are not safety conscious in order to stay at the helm.
Even if they are right that it is important to stay in the race, what Anthropic should be doing is
Calling for governments to enforce a worldwide Pause, such that they can stop racing towards Superintelligence without worrying about other labs getting ahead.
Trying to agree with other labs to decelerate race dynamics.
Warning politicians and the public that automation of all office jobs may be just around the corner.
Setting out their views as to how politics works in a world with superintelligence.
Declaring in advance what would compel them to consider AIs as moral patients.
All of which they could do while continuing to compete in the race. RSPs are nice, but not sufficient.
It’s also worth remembering that this is advertising. Claiming to be a little bit better on some cherry-picked metrics a year after GPT-4 was released is hardly a major accelerant in the overall AI race.
Fair point. On the other hand, the perception is in many ways more important than the actual capability in terms of incentivizing competitors to race faster.
Also, based on early user reports, it seems to actually be noticeably better than GPT-4.
EA promoted earning to give. When the movement largely moved away from it, not enough work was done to make that distance clear.
Why would we want to do that? Earning to give is a good way to help the world. Maybe not the best, but still good.
Why I’m doing PauseAI
What is the risk level below which you’d be OK with unpausing AI?
I think approximately 1 in 10,000 chance of extinction for each new GPT would be acceptable given the benefits of AI. This is approximately my guess for GPT-5, so if we could release that model and then pause, I’d be okay with that.
A major consideration here is the use of AI to mitigate other x-risks. Some of Toby Ord’s x-risk estimates:
AI − 1 in 10
Engineering Pandemic − 1 in 30
Unforeseen anthropogenic risks (eg. dystopian regime, nanotech) − 1 in 30
Other anthropogenic risks − 1 in 50
Nuclear war − 1 in 1000
Climate change − 1 in 1000
Other environmental damage − 1 in 1000
Supervolcano − 1 in 10,000
If there was a concrete plan under which AI could be used to mitigate pandemics and anthropogenic risks, then I would be ok with a higher probability of AI extinction, but it seems more likely that AI progress would increase these risks before it decreased them.
AI could be helpful for climate change and eventually nuclear war. So maybe I should be willing to go a little higher on the risk. But we might need a few more GPTs to fix these problems and if each new GPT is 1 in 10,000 then it starts to even out.
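To make the "it starts to even out" arithmetic concrete, here is a back-of-the-envelope sketch (my illustration, not from the original comment): if each of n further GPT releases carried an independent 1-in-10,000 extinction risk, the cumulative risk compounds as 1 − (1 − p)^n.

```python
# Back-of-the-envelope: cumulative extinction risk from n further GPT
# releases, assuming each carries an independent per-release risk p.
# The 1-in-10,000 figure is the illustrative guess from the text above.

def cumulative_risk(p: float, n: int) -> float:
    """Probability that at least one of n independent releases leads to extinction."""
    return 1 - (1 - p) ** n

p = 1 / 10_000
for n in (1, 3, 10):
    print(f"{n} releases: cumulative risk ≈ {cumulative_risk(p, n):.6f}")
```

At these small per-release probabilities the cumulative risk is roughly n × p, so a few more GPTs at 1-in-10,000 each quickly approach the 1-in-1000 scale of the climate and nuclear risk estimates they might help mitigate.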
What do you think about the potential benefits from AI?
I’m very bullish about the benefits of an aligned AGI. Besides mitigating x-risk, I think curing aging should be a top priority and is worth taking some risks to obtain.
How do you interpret models of AI pause, such as this one from Chad Jones?
I’ve read the post quickly, but I don’t have a background in economics, so it would take me a while to fully absorb. My first impression is that it is interesting but not that useful for making decisions right now. The simplifications required by the model offset the gains in rigor. What do you think? Is it something I should take the time to understand?
My guess would be that the discount rate is pretty cruxy. Intuitively I would expect almost any gains over the next 1000 years to be offset by reductions in x-risk since we could have zillions of years to reap the benefits. (On a meta-level I believe moral questions are not “truthy” so this is just according to my vaguely total utilitarian preferences, not some deeper truth).
There’s a crux which is very important. If you only want to attend protests where the protesters are reasonable and well informed and agree with you, then you implicitly only want to attend small protests.
It seems pretty clear to me that most people are much less concerned about x-risk than about job loss and other concerns. So we have to make a decision: do we stick to our guns, have the most epistemically virtuous protest movement in history, and make it 10x harder to recruit new people and grow the movement? Or do we compromise, welcome people with many different concerns, and form alliances with groups we don’t agree with, in order to have a large and impactful movement?
It would be a failure of instrumental rationality to demand the former. This is just a basic reality about solving coordination problems.
[To provide a counter argument: having a big movement that doesn’t understand the problem is not useful. At some point the misalignment between the movement and the true objective will be catastrophic.
I don’t really buy this because I think that pausing is a big and stable enough target and it is a good solution for most concerns.]
This is something I am actually quite uncertain about so I would like to hear your opinion.
Where in Cambridge will this take place (accommodation / venue)?
Is compensation provided for both students and mentors?
Will you provide/subsidize access to GPUs?