Thanks for linking Dario’s testimony. I actually found this extract which was closer to answering my question:
I wanted to answer one obvious question up front: if I truly believe that AI’s risks are so severe, why even develop the technology at all? To this I have three answers:
First, if we can mitigate the risks of AI, its benefits will be truly profound. In the next few years it could greatly accelerate treatments for diseases such as cancer, lower the cost of energy, revolutionize education, improve efficiency throughout government, and much more.
Second, relinquishing this technology in the United States would simply hand over its power, risks, and moral dilemmas to adversaries who do not share our values.
Finally, a consistent theme of our research has been that the best mitigations to the risks of powerful AI often also involve powerful AI. In other words, the danger and the solution to the danger are often coupled. Being at the frontier thus puts us in a strong position to develop safety techniques (like those I’ve mentioned above), and also to see ahead and warn about risks, as I’m doing today.
I know this statement would have been massively pre-prepared for the hearing, but I don’t feel super convinced by it:
On his point 1) such benefits have to be weighed up against the harms, both existential and not. But just as many parts of the xRisk story are speculative, so are many of the purported benefits from AI research. I guess Dario is saying ‘it could’ and not ‘it will’, but to my mind, if you want to “improve efficiency throughout government” you’ll need political solutions, not technical ones.
Point 2) is the ‘but China’ response to AI Safety. I’m not an expert in US foreign policy strategy (funny how everyone is these days), but I’d note this response only works if you view the path to increasing capability as straightforward. It also doesn’t work, in my mind, if you think there’s a high chance of xRisk. Just because someone else might ignite the atmosphere doesn’t mean you should too. I’d also note that Dario doesn’t sound nearly as confident making this statement as he did talking about it with Dwarkesh recently.
Point 3) makes sense if you think the value of the benefits massively outweighs the harms, so that you solve the harms as you reap the benefits. But if those harms outweigh the benefits, or you incur a substantial “risk of ruin”, then being at the frontier and expanding it further unilaterally makes less sense to me.
I guess I’d want the CEOs and those with power in these companies to actually face the scrutiny in the political sphere that they deserve. These are important and consequential issues we’re talking about, and I just get the vibe that the ‘kid gloves’ need to come off a bit in terms of oversight and scrutiny/scepticism.
Yeah, I think the real reason is we think we’re safer than OpenAI (and possibly some wanting-power but that mostly doesn’t explain their behavior).