We’re Not Ready: thoughts on “pausing” and responsible scaling policies
Views are my own, not Open Philanthropy’s. I am married to the President of Anthropic and have a financial interest in both Anthropic and OpenAI via my spouse.
Over the last few months, I’ve spent a lot of my time trying to help out with efforts to get responsible scaling policies adopted. In that context, a number of people have said it would be helpful for me to be publicly explicit about whether I’m in favor of an AI pause. This post will give some thoughts on these topics.
I think transformative AI could be soon, and we’re not ready
I have a strong default to thinking that scientific and technological progress is good and that worries will tend to be overblown. However, I think AI is a big exception here because of its potential for unprecedentedly rapid and radical transformation.1
I think sufficiently advanced AI would present enormous risks to the world. I’d put the risk of a world run by misaligned AI (or an outcome broadly similar to that) between 10% and 90% (so: above 10%) if it is developed relatively soon on something like today’s trajectory. And there are a whole host of other issues (e.g.) that could be just as important if not more so, and that it seems like no one has really begun to get a handle on.
Is that level of AI coming soon, and could the world be “ready” in time? Here I want to flag that timelines to transformative or even catastrophically risky AI are very debatable, and I have tried to focus my work on proposals that make sense even for people who disagree with me on the below points. But my own views are that:
There’s a serious (>10%) risk that we’ll see transformative AI2 within a few years.
In that case it’s not realistic to have sufficient protective measures for the risks in time.
Sufficient protective measures would require huge advances on a number of fronts, including information security that could take years to build up, and alignment science breakthroughs that we can’t put a timeline on given the nascent state of the field. So even decades might or might not be enough time to prepare, even with a lot of effort.
If it were all up to me, the world would pause now—but it isn’t, and I’m more uncertain about whether a “partial pause” is good
In a hypothetical world where everyone shared my views about AI risks, there would (after deliberation and soul-searching, and only if these didn’t change my current views) be a global regulation-backed pause on all investment in and work on (a) general3 enhancement of AI capabilities beyond the current state of the art, including by scaling up large language models; (b) building more of the hardware (or parts of the pipeline most useful for more hardware) most useful for large-scale training runs (e.g., H100s); (c) algorithmic innovations that could significantly contribute to (a).
The pause would end when it was clear how to progress some amount further with negligible catastrophic risk, and it would be reinstituted before going beyond negligible catastrophic risk. (This means another pause might occur shortly afterward. Overall, I think it’s plausible that the right amount of time to be either paused or in a sequence of small scaleups followed by pauses could be decades or more, though this depends on a lot of things.) This would require a strong, science-backed understanding of AI advances such that we could be assured of quickly detecting early warning signs of any catastrophic-risk-posing AI capabilities we didn’t have sufficient protective measures for.
I didn’t have this view a few years ago. Why now?
I think today’s state-of-the-art AIs are already in the zone where (a) we can already learn a huge amount (about AI alignment and other things) by studying them; (b) it’s hard to rule out that a modest scaleup from here—or an improvement in “post-training enhancements” (advances that make it possible to do more with an existing AI than before, without having to do a new expensive training run)4—could lead to models that pose catastrophic risks.
I think we’re pretty far from being ready even for early versions of catastrophic-risk-posing models (for example, I think information security is not where it needs to be, and this won’t be a quick fix).
If a model’s weights were stolen and became widely available, it would be hard to rule out that model becoming more dangerous later via post-training enhancements. So even training slightly bigger models than today’s state of the art seems to add nontrivially to the risks.
All of that said, I think that advocating for a pause now might lead instead to a “partial pause” such as:
Regulation-mandated pauses in some countries and not others, with many researchers going elsewhere to work on AI scaling.
Temporary bans on large training runs, but not on post-training improvements or algorithmic improvements or expansion of hardware capacity. In this case, an “unpause”—including via new scaling methods that didn’t technically fall under the purview of the regulatory ban, or via superficially attractive but insufficient protective measures, or via a sense that the pause advocates had “cried wolf”—might lead to extraordinarily fast progress, much faster than the default and with a more intense international race.
Regulation with poor enough design and/or enough loopholes as to create a substantial “honor system” dynamic, which might mean that people more concerned about risks become totally uninvolved in AI development while people less concerned about risks race ahead. This in turn could mean a still-worse ratio of progress on AI capabilities to progress on protective measures.
No regulation or totally mis-aimed regulation (e.g., restrictions on deploying large language models but not on training them), accompanied by the same dynamic from the previous bullet point.
It’s much harder for me to say whether these various forms of “partial pause” would be good.
To pick a couple of relatively simple imaginable outcomes and how I’d feel about them:
If there were a US-legislated moratorium on training runs exceeding a compute threshold in line with today’s state-of-the-art models, with the implicit intention of doing so until there was a convincing and science-backed way of bounding the risks—with broad but not necessarily overwhelming support from the general public—I’d consider this to be probably a good thing. I’d think this even if the ban (a) didn’t yet come with signs of progress on international enforcement; (b) started with only relatively weak domestic enforcement; and (c) didn’t include any measures to slow production of hardware, advances in algorithmic efficiency or post-training enhancements. In this case I would be hopeful about progress on (a) and (b), as well as on protective measures generally, because of the strong signal this moratorium would send internationally about the seriousness of the threat and the urgency of developing a better understanding of the risks, and of making progress on protective measures. I have very low confidence in my take here and could imagine changing my mind easily.
If a scaling pause were implemented using executive orders that were likely to be overturned next time the party in power changed, with spotty enforcement and no effects on hardware and algorithmic progress, I’d consider this pause a bad thing. This is also a guess that I’m not confident in.
Overall I don’t have settled views on whether it’d be good for me to prioritize advocating for any particular policy.5 At the same time, if it turns out that there is (or will be) a lot more agreement with my current views than there currently seems to be, I wouldn’t want to be even a small obstacle to big things happening, and there’s a risk that my lack of active advocacy could be confused with opposition to outcomes I actually support.
I feel generally uncertain about how to navigate this situation. For now I am just trying to spell out my views and make it less likely that I’ll get confused for supporting or opposing something I don’t.
Responsible scaling policies (RSPs) seem like a robustly good compromise with people who have different views from mine (with some risks that I think can be managed)
My sense is that people have views all over the map about AI risk, such that it would be hard to build a big coalition around the kind of pause I’d support most.
Some people think that the kinds of risks I’m worried about are far off, farfetched or ridiculous.
Some people think such risks might be real and soon, but that we’ll make enough progress on security, alignment, etc. to handle the risks—and indeed, that further scaling is an important enabler of this progress (e.g., a lot of alignment research will work better with more advanced systems).
Some people think the risks are real and soon, but might be relatively small, and that it’s therefore more important to focus on things like the U.S. staying ahead of other countries on AI progress.
I’m excited about RSPs partly because it seems like people in those categories—not just people who agree with my estimates about risks—should support RSPs. This raises the possibility of a much broader consensus around conditional pausing than I think is likely around immediate (unconditional) pausing. And with a broader consensus, I expect an easier time getting well-designed, well-enforced regulation.
I think RSPs represent an opportunity for wide consensus that pausing under certain conditions would be good, and this seems like it would be an extremely valuable thing to establish.
Importantly, agreeing that certain conditions would justify a pause is not the same as agreeing that they’re the only such conditions. I think agreeing that a pause needs to be prepared for at all seems like the most valuable step, and revising pause conditions can be done from there.
Another reason I am excited about RSPs: I think optimally risk-reducing regulation would be very hard to get right. (Even the hypothetical, global-agreement-backed pause I describe above would be hugely challenging to design in detail.) When I think something is hard to design, my first instinct is to hope for someone to take a first stab at it (or at least at some parts of it), learn what they can about the shortcomings, and iterate. RSPs present an opportunity to do something along these lines, and that seems much better than focusing all efforts and hopes on regulation that might take a very long time to come.
There is a risk that RSPs will be seen as a measure that is sufficient to contain risks by itself—e.g., that governments may refrain from regulation, or simply enshrine RSPs into regulation, rather than taking more ambitious measures. Some thoughts on this:
I think it’s good for proponents of RSPs to be open about the sorts of topics I’ve written about above, so they don’t get confused with e.g. proposing RSPs as a superior alternative to regulation. This post attempts to do that on my part. And to be explicit: I think regulation will be necessary to contain AI risks (RSPs alone are not enough), and should almost certainly end up stricter than what companies impose on themselves.
In a world where there’s significant political support for regulations well beyond what companies support, I expect that any industry-backed setup will be seen as a minimum for regulation. In a world where there isn’t such political support, I think it would be a major benefit for industry standards to include conditional pauses. So overall, the risk seems relatively low and worth it here.
I think it’d be unfortunate to try to manage the above risk by resisting attempts to build consensus around conditional pauses, if one does in fact think conditional pauses are better than the status quo. Actively fighting improvements on the status quo because they might be confused for sufficient progress feels icky to me in a way that’s hard to articulate.
Footnotes
1. The other notable exception I’d make here is biology advances that could facilitate advanced bioweapons, again because of how rapid and radical the destruction potential is. I default to optimism and support for scientific and technological progress outside of these two cases.
2. I like this discussion of why improvements on pretty narrow axes for today’s AI systems could lead quickly to broadly capable transformative AI.
3. People would still be working on making AI better at various specific things (for example, resisting attempts to jailbreak harmlessness training, or just narrow applications like search and whatnot). It’s hard to draw a bright line here, and I don’t think it could be done perfectly using policy, but in the “if everyone shared my views” construction everyone would be making at least a big effort to avoid finding major breakthroughs that were useful for general enhancement of very broad and hard-to-bound suites of AI capabilities.
4. Examples include improved fine-tuning methods and datasets, new plugins and tools for existing models, new elicitation methods in the general tradition of chain-of-thought reasoning, etc.
5. I do think that at least someone should be trying it. There’s a lot to be learned from doing this—e.g., about how feasible it is to mobilize the general public—and this could inform expectations about what kinds of “partial victories” are likely.
Holden—these are reasonable points. But I have two quibbles.
First, the recent surveys of the general public’s attitudes towards AI risk suggest that a strongly enforced global pause would actually get quite a bit of support. It’s not outside the public’s Overton Window. It might be considered an ‘extreme solution’ by AI industry insiders and e/acc cultists. But the public seems to understand that it’s just fundamentally dangerous to invent Artificial General Intelligence that’s as smart as smart humans (and much, much faster), or to invent Artificial Superintelligence. AI experts might patronize the public by claiming they’re just reacting to sensationalized Hollywood depictions of AI risk. But I don’t care. If the public understands the potential risks, through whatever media they’ve been exposed to, and if it leads them to support a pause, we might as well capitalize on public sentiment.
Second, I worry that EAs generally have a ‘policy fetish’, in assuming that the only way to slow down a technological field is through formal, government-sanctioned regulation and ‘good policy’ solutions. I think this is incorrect, both historically and logically. In this piece on moral stigmatization of AI, I argued that an informal, grass-roots, public moral backlash against the AI industry could accomplish almost everything formal regulation can accomplish, without many of the loopholes and downsides that regulation would face. If the general public realizes that AGI-directed research is just fundamentally stupid and reckless and a huge extinction risk, they can stigmatize AI researchers, funders, suppliers, etc in ways that shut down the industry—potentially for decades. If that public stigmatization goes global, the AI industry globally could be put on ‘pause’ for quite a while. Sure, we might delay some potential benefits from some narrow AI applications. But that’s a tradeoff most reasonable people would be willing to accept. (For example, if my generation misses out on AI-created longevity treatments, and we die, but our kids survive, without facing AGI-imposed extinction risks, that’s fine with me—and I think it would be OK with most parents.)
I understand that harnessing the power of moral stigmatization to shut down a promising-but-dangerous technology like AI isn’t the usual EA style, but at this point, it might be the only practical solution to pausing dangerous AI development.
Fully agree. A potential taboo on AGI is something that is far too often overlooked by people who worry about pauses not working well (e.g. see also Scott Alexander, Matthew Barnett, Nora Belrose).
This is true—it’s the same tactic anti-GMO lobbies, the NRA, NIMBYs, and anti-vaxxers have used. The public as a whole doesn’t need to be anti-AI, even a vocal minority will be enough to swing elections and ensure an unfavorable regulatory environment. If I had to guess, AI would end up like nuclear fission—not worth the hassle, but with no off-ramp, no way to unring the alarm bell.
I think the public might support a pause on scaling, but I’m much more skeptical about the sort of hardware-inclusive pause that Holden discusses here.
A hardware-inclusive pause sufficient for pausing for >10 years would probably effectively dismantle companies like Nvidia and would at least put a serious dent in TSMC. This would involve huge job losses and a large hit to the stock market. I expect people would not support such a pause, which effectively requires dismantling a powerful industry.
It’s possible I’m overestimating the extent to which hardware needs to be stopped for such a ban to be robust and an improvement on the status quo.
I’m not an expert, but the economic damage seems to me plausibly a question of implementation details. E.g., if you ask for a stop to hardware improvements at the same time as implementing hardware-level compute monitoring, this likely requires developing new technology to do efficiently, which may allow the current companies to maintain their leading position.
Of course, restrictions are going to have some effect, and plausibly may hit Nvidia’s valuation, but it is not at all clear that the economic consequences would necessarily be dramatic (the situation of the car industry switching to EVs might be vaguely analogous).
I think the tech companies—and in particular the AGI companies—are already too powerful for such an informal public backlash to slow them down significantly.
Disagree. Almost every successful moral campaign in history started out as an informal public backlash against some evil or danger.
The AGI companies involve a few thousand people versus 8 billion, a few tens of billions of funding versus 360 trillion total global assets, and about 3 key nation-states (US, UK, China) versus 195 nation-states in the world.
Compared to actually powerful industries, AGI companies are very small potatoes. Very few people would miss them if they were set on ‘pause’.
I hope you are right.
I imagine it going hand in hand with more formal backlashes (i.e. regulation, law, treaties).
Strong agree. I wish ARC and Anthropic had been more clear about this, and I would be less critical of their RSP posts if they were upfront and clear about this stance. I think your post is strong and clear (you state multiple times, unambiguously, that you think regulation is necessary and that you wish the world had more political will to regulate). I appreciate this, and I’m glad you wrote this post.
A few thoughts:
One reason I’m critical of the Anthropic RSP is that it does not make it clear under what conditions it would actually pause, or for how long, or under what safeguards it would determine it’s OK to keep going. It is nice that they said they would run some evals at least once every 4x increase in effective compute and that they don’t want to train catastrophe-capable models until their infosec makes it more expensive for actors to steal their models. It is nice that they said that once they get systems that are capable of producing biological weapons, they will at least write something up about what to do with AGI before they decide to just go ahead and scale to AGI. But I mostly look at the RSP and say “wow, these are some of the most bare minimum commitments I could’ve expected, and they don’t even really tell me what a pause would look like and how they would end it.”
Meanwhile, we have OpenAI (that plans to release an RSP at some point), DeepMind (rumor has it they’re working on one but also that it might be very hard to get Google to endorse one), and Meta (oof). So I guess I’m sort of left thinking something like “If Anthropic’s RSP is the best RSP we’re going to get, then yikes, this RSP plan is not doing so well.” Of course, this is just a first version, but the substance of the RSP and the way it was communicated about doesn’t inspire much hope in me that future versions will be better.
I think the RSP frame is wrong, and I don’t want regulators to use it as a building block. My understanding is that labs are refusing to adopt an evals regime in which the burden of proof is on labs to show that scaling is safe. Given this lack of buy-in, the RSP folks concluded that the only thing left to do was to say “OK, fine, but at least please check to see if the system will imminently kill you. And if we find proof that the system is pretty clearly dangerous or about to be dangerous, then will you at least consider stopping?” It seems plausible to me that governments would be willing to start with something stricter and more sensible than this “just keep going until we can prove that the model has highly dangerous capabilities” regime.
I think some improvements on the status quo can be net negative because they either (a) cement in an incorrect frame or (b) take a limited window of political will/attention and steer it toward something weaker than what would’ve happened if people had pushed for something stronger. For example, I think the UK government is currently looking around for substantive stuff to show their constituents (and themselves) that they are doing something serious about AI. If companies give them a milquetoast solution that allows them to say “look, we did the responsible thing!”, it seems quite plausible to me that we actually end up in a worse world than if the AIS community had rallied behind something stronger.
If everyone communicating about RSPs was clear that they don’t want it to be seen as sufficient, that would be great. In practice, that’s not what I see happening. Anthropic’s RSP largely seems devoted to signaling that Anthropic is great, safe, credible, and trustworthy. Paul’s recent post is nuanced, but I don’t think the “RSPs are not sufficient” frame was sufficiently emphasized (perhaps partly because he thinks RSPs could lead to a 10x reduction in risk, which seems crazy to me, and if he goes around saying that to policymakers, I expect them to hear something like “this is a good plan that would sufficiently reduce risks”). ARC’s post tries to sell RSPs as a pragmatic middle ground and IMO pretty clearly does not emphasize (or even mention?) some sort of “these are not sufficient” message. Finally, the name itself sounds like it came out of a propaganda department: “hey, governments, look, we can scale responsibly”.
At minimum, I hope that RSPs get renamed, and that those communicating about RSPs are more careful to avoid giving off the impression that RSPs are sufficient.
More ambitiously, I hope that folks working on RSPs seriously consider whether or not this is the best thing to be working on or advocating for. My impression is that this plan made more sense when it was less clear that the Overton Window was going to blow open, Bengio/Hinton would enter the fray, journalists and the public would be fairly sympathetic, Rishi Sunak would host an xrisk summit, Blumenthal would run hearings about xrisk, etc. I think everyone working on RSPs should spend at least a few hours taking seriously the possibility that the AIS community could be advocating for stronger policy proposals and getting out of the “we can’t do anything until we literally have proof that the model is imminently dangerous” frame. To be clear, I think some people who do this reflection will conclude that they ought to keep making marginal progress on RSPs. I would be surprised if the current allocation of community talent/resources was correct, though, and I think on the margin more people should be doing things like CAIP & Conjecture, and fewer people should be doing things like RSPs. (Note that CAIP & Conjecture both have important flaws/limitations, and I think this partly has to do with the fact that so much top community talent has been funneled into RSPs/labs relative to advocacy/outreach/outside game.)
Cross-posted from LessWrong.
It’s hard to take anything else you’re saying seriously when you say things like this; it seems clear that you just haven’t read Anthropic’s RSP. I think that the current conditions and resulting safeguards are insufficient to prevent AI existential risk, but to say that it doesn’t make them clear is just patently false.
The conditions under which Anthropic commits to pausing in the RSP are very clear. In big bold font on the second page it says:
And then it lays out a series of safety procedures that Anthropic commits to meeting for ASL-3 models or else pausing, with some of the most serious commitments here being:
And a clear evaluation-based definition of ASL-3:
This is the basic substance of the RSP; I don’t understand how you could have possibly read it and missed this. I don’t want to be mean, but I am really disappointed in these sort of exceedingly lazy takes.
I think calling a take “lazy”, which could indeed be considered “mean”, is not a very helpful approach; you could have made your point without that kind of derision. There are going to be a lot of misunderstandings and hot takes around RSPs, and I think AI company employees especially should err heavily on the side of patience and kind understanding if they want to avoid people becoming more adversarial towards them.
Live by the sword, die by the sword.
Akash said...
“that it does not make it clear under what conditions it would actually pause, or for how long, or under what safeguards it would determine it’s OK to keep going.”
I agree the conditions from the RSP you quoted are clearer than I would have expected from reading Akash’s comment above. But to be fair to Akash, of the paragraphs you posted, only the last one seems to state a clear and specific condition for pausing; the others seem to say “refer to experts”, which could be considered unclear, to give Akash the benefit of the doubt.
And they don’t say how long the pause would be or the conditions for restarting, either.
You have a huge amount of clout in determining where $100Ms of OpenPhil money is directed toward AI x-safety. I think you should be much more vocal on this—at least indirectly, via OpenPhil grantmaking. In fact, I’ve been surprised at how quiet you (and OpenPhil) have been since GPT-4 was released!
Reading the first half of this post, I feel that your views are actually very close to my own. It leaves me wondering how much your conflicts of interest are factoring into why you come down in favour of RSPs (above pausing now) in the end.
I’m guessing stopping scaling by US POTUS executive order is not even legally possible though? So I don’t think we’d have to worry about that.
Legal or constitutional infeasibility does not always prevent executive orders from being applied (or followed). I feel like the US president declaring a state of emergency related to AI catastrophic risk (and then forcing large AI companies to stop training large models) sounds at least as constitutionally viable as the attempted executive order for student loan forgiveness.
I agree that this seems fairly unlikely to happen in practice though.
I think you put it well when you said:
“Some people think that the kinds of risks I’m worried about are far off, farfetched or ridiculous.”
If I made the claim that we had 12 months before all of humanity is wiped by an asteroid, you’d rightly ask me for evidence. Have I picked up a distant rock in space using radio telescopes? Some other tangible proof? Or is it a best-guess, since, hey, it’s technically possible that we could be hit with an asteroid on any given year. Then imagine if I advocate we spend two percent of global GDP preparing for this event.
That’s where the state of AGI fear is—all scenarios depend on wild leaps of faith and successive assumptions that build on each other.
I’ve attempted to put this all in one place with this post.
Unfortunately, I believe that any pause that comes about might be publicly acknowledged, but there are certain interests that would be far too happy to drive any further development underground. The potential for shadow R&D would only cause others to also continue, leading to a situation whereby the likelihood of ANY hope of regulation would disappear completely. I think the threat is real and already an existential one. AI is “in the system” already and developing itself into something we cannot possibly imagine.
Don’t forget the Google or Microsoft experiment decades ago when two AIs were talking to each other and created a language that the onlookers couldn’t understand...so they switched it off. If you study the words that were initially being used by the AI algorithm, it can be seen as trying to identify itself between subject, object and verb. In short, AI was even then developing self-awareness.
We are decades behind the curve here.
I’m not sure which is the better place to have this discussion, so I’m trying both. Copied from my comment on Less Wrong:
That all makes sense. To expand a little more on some of the logic:
It seems like the outcome of a partial pause rests in part on whether that would tend to put people in the lead of the AGI race who are more or less safety-concerned.
I think it’s nontrivial that we currently have three teams in the lead who all appear to honestly take the risks very seriously, and changing that might be a very bad idea.
On the other hand, the argument for alignment risks is quite strong, and we might expect more people to take the risks more seriously as those arguments diffuse. This might not happen if polarization becomes a large factor in beliefs on AGI risk. The evidence for climate change was also pretty strong, but we saw half of America believe in it less, not more, as evidence mounted. The lines of polarization would be different in this case, but I’m afraid it could happen. I outlined that case a little in AI scares and changing public beliefs.
In that case, I think a partial pause would have a negative expected value, as the current lead decayed, and more people who believe in risks less get into the lead by circumventing the pause.
This makes me highly unsure if a pause would be net-positive. Having alignment solutions won’t help if they’re not implemented because the taxes are too high.
The creation of compute overhang is another reason to worry about a pause. It’s highly uncertain how far we are from making adequate compute for AGI affordable to individuals. Algorithms and compute will keep getting better during a pause. So will theory of AGI, along with theory of alignment.
This puts me, and I think the alignment community at large, in a very uncomfortable position of not knowing whether a realistic pause would be helpful.
It does seem clear that creating mechanisms and political will for a pause is a good idea.
Advocating for more safety work also seems clear cut.
To this end, I think it’s true that you create more political capital by successfully pushing for policy.
A pause now would create even more capital, but it’s also less likely to be a win, and it could wind up creating polarization and so costing rather than creating capital. It’s harder to argue for a pause now when even most alignment folks think we’re years from AGI.
So perhaps the low-hanging fruit is pushing for voluntary RSPs and government funding for safety work. These are clear improvements, and likely to be wins that create capital for a pause as we get closer to AGI.
There’s a lot of uncertainty here, and that’s uncomfortable. More discussion like this should help resolve that uncertainty, and thereby help clarify and unify the collective will of the safety community.
Seth—you mentioned that ‘we currently have three teams in the lead who all appear to honestly take the risks very seriously, and changing that might be a very bad idea.’
I assume you’re referring to OpenAI, DeepMind, and Anthropic.
Yes, they all give lip service to AI safety, and they hire safety researchers, and they safety-wash their capabilities development.
But I see no evidence that they would actually stop their AGI development under any circumstances, no matter how risky it started to seem.
Maybe you trust their leadership. I do not. And I don’t think the 8 billion people in the world should have their fates left in the hands of a tiny set of AI industry leaders—no matter how benevolent they seem, or how many times they talk about AI safety in interviews.
I agree that those teams aren’t completely trustworthy, and in an ideal world, we should be making this decision by including everyone on earth. But with a partial pause, do you expect to have better or worse teams in the lead for achieving AGI? That was my point.
Well from an AI safety viewpoint, the very worst teams to be leading the AGI rush would be those that (1) are very competent, well-funded, well-run, and full of idealistic talent, and (2) don’t actually care about reducing extinction risk—however much lip service they pay to AI safety.
From that perspective, OpenAI is the worst team, and they’re in the lead.
I think that’s quite a pessimistic take. I take Altman seriously on caring about x-risk, although I’m not sure he takes it quite seriously enough. This is based on public comments to that effect around 2013, before he started running OpenAI. And Sutskever definitely seems properly concerned.