Yeah, Dario pretty explicitly describes liking RSPs in part because they minimally constrain continued scaling:
“I mean one way to think about it is like the responsible scaling plan doesn’t slow you down except where it’s absolutely necessary. It only slows you down where it’s like there’s a critical danger in this specific place, with this specific type of model, therefore you need to slow down.” (Logan Bartlett interview, h/t Joe_Collman).
This makes sense, but if anything the conflict of interest seems more alarming when you're influencing national policy. For example, I would guess that you are one of the people—maybe literally among the top 10?—who stands to personally lose the most money in the event of an AI pause. Are you worried about this, or taking any actions to mitigate it (e.g., trying to convert equity into cash)?