I kept responding to Paul’s arguments in private conversations, to the point that I decided to share my comments here.
The hardware overhang argument has poor grounding.
When labs scale models, the result is more investment in producing GPU chips with more FLOPS (see Sam Altman’s play for a UAE chip factory) and lower latency between them (see the EA start-up Fathom Radiant, which began by offering fibre-optic-connected supercomputers to OpenAI and has now probably shifted to Anthropic).
The increasing combinatorial complexity of models and their connectivity to outside signals become exponentially harder to keep safe. So the only viable pathway is to not scale further, rather than to “helplessly” absorb all the hardware that currently gets produced.
Further, AI Impacts found no historical analogues for a hardware overhang, and there are plenty of common-sense reasons why the argument’s premises are unsound.
The hardware overhang claim lacks grounding, but that hasn’t prevented alignment researchers from repeating it in a way that ends up weakening coordination efforts to restrict AI corporations.
Responsible scaling policies have ‘safety-washing’ written all over them.
Consider the original formulation by Anthropic: “Our RSP focuses on catastrophic risks – those where an AI model directly causes large scale devastation.”
In other words: our company can keep scaling as long as our staff/trustees do not deem the risk of a new AI model directly causing a catastrophe to be sufficiently high.
Is that responsible?
It assumes that further scaling can be risk-managed. It assumes that risk management protocols alone are enough.
Then, the company invents a new wonky risk management framework, ignoring established and more comprehensive practices.
Paul argues that this could be the basis for effective regulation. But Anthropic et al. lobbying national governments to enforce the use of that wonky risk management framework makes things worse.
It distracts from policy efforts to prevent the increasing harms. It creates a perception of safety (instead of actually ensuring safety).
That is ideal for AI corporations that want to keep scaling while avoiding being held accountable.
RSPs support regulatory capture. I want us to be clear about what we are dealing with.
Paul—you wrote that ‘If the world were unified around the priority of minimizing global catastrophic risk, I think that we could reduce risk significantly further by implementing a global, long-lasting, and effectively enforced pause on frontier AI development—including a moratorium on the development and production of some types of computing hardware. The world is not unified around this goal....’
I think that underestimates the current public consensus and concerns about AI risk. The polls I’ve seen suggest widespread public hostility to AGI development, and skepticism about the AI industry’s capacity to manage AI development safely. Indeed, the public sentiment seems much closer to that of AI Safety experts (eg within EA), than it does to the views of AI industry insiders (such as Yann LeCun), or to e/acc people who yearn for ‘the Singularity’.
I’m still digesting the implications of these opinion polls, but I think they should nudge EAs towards a fairly significant update to our expectations about the role the public could play in supporting an AI Pause. It’s worth remembering that the public has seen depictions of dangerous AI in novels, movies, and TV series ever since the 1927 movie ‘Metropolis’ (or, arguably, even since the 1818 novel ‘Frankenstein’). Ordinary folks are primed to understand that AI is very risky. They might not understand the details of technical AI alignment, or RSPs, or LLMs, or deep learning. But the political will seems to be there to support an AI Pause.
My worry is that we EAs have spent so many years assuming that the public can’t understand AI risks, that we’re still pushing ahead on technical and policy solutions, because that’s what we’re used to doing. And we assume the political will isn’t there to do anything more significant and binding in reducing X risk. But perhaps the public will really is there.
Executive summary: The post discusses the importance of responsible scaling policies (RSPs) in AI development, their relationship with regulation, and their potential to reduce risks associated with powerful AI systems.
Key points:
Responsible scaling policies (RSPs) play a crucial role in mitigating the risks associated with rapid AI development, as developers may not have the expertise or controls to handle superhuman AI systems safely.
RSPs create transparency and clear conditions under which AI development should be paused, improving public understanding and debate about AI safety measures.
While RSPs are important, they are not a substitute for regulation, as voluntary commitments may lack universality and oversight.
Implementing RSPs can increase the likelihood of effective regulation and provide a path for iterative policy improvements.
The post acknowledges that even with RSPs, significant risks remain in rapid AI development, leading to the potential need for a global, hardware-inclusive pause in AI development.
The author suggests that RSPs, like Anthropic’s, can reduce risk and create a framework for measuring and improving AI safety but still need refinement and audits.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
I’m curating this post. There have been several recent posts on the theme of RSPs. I’m featuring this one, but I recommend the other two posts to readers.
I particularly like that these posts mention that their authors view these policies as good for eventual regulation, and are willing to be clear about this.
No one under 160 IQ understands me, and I can prove that; I saved our 1000 pages of conversations. No one takes me seriously, but people with 160 IQs say I am extremely intelligent...
I don’t think anyone is strong enough to see this, as evolution hides truths from us that would make us less fit for survival, and we don’t need to know them...
Everything is will-to-power and nothing besides; people are selfish and want the most for themselves. The top 1% rich control everything, and smart cities with their own municipal governments are underway… We are just cannon fodder and fap material to them. Good people only get exploited or have crimes pinned on them… Life is constant exploitation, appropriation, and conquest… Free will is an illusion. Life is so terrible it can only exist based on lies; evolution hides truths unless they coincide with survival: https://scholarcommons.scu.edu/cgi/viewcontent.cgi?article=1052&context=phi Everything is one whole consciousness/manifestation of god: we will suffer forever all painful deaths and tortures unless we change (provided the nature of reality allows for it); just google WW2 Japan Unit 731!!! https://www.youtube.com/watch?v=K8GVHRMKOiM The only truth we can be sure of is that life is HELL!!!