I’d like to say I’m grateful to have read this post; it helped explicate some of my own intuitions and drew my attention to a few major cruxes. I’m asking a bunch of questions because knowing what to do is hard and we should all figure it out together, not because I’m especially confident in the directions the questions point.
Should we take the contents of the section “What should people concerned about AI safety do now?” to be Alex Lintz’s ordinal ranking of worthwhile ways to spend time? If so, would you like to argue for this ranking? If not, would you like to provide an ordinal ranking and argue for it?
“The US is unlikely to be able to prevent China (or Russia) from stealing model weights anytime soon given both technological constraints and difficulty/cost of implementing sufficient security protocols.”
Do you have a link to a more detailed analysis of this? My guess is there’s precedent for the government taking security seriously in relation to some private industry and locking all of its infrastructure down, but maybe cyber is somewhat different (and, certainly, more porous than would be ideal anyway). Is the real way to guarantee cybersecurity just conventional warfare? (yikes)
What’s the theory of change behind mass movement building and public-facing comms? We agitate the populace and then the populist leader does something to appease them? Or something else?
You call out the FATE people as explicitly not worth coalition-building with at this time; what about the jobs people (their more right-coded counterpart)? Historically we’ve been hesitant to ally with either group, since their models of ‘how this whole thing goes’ are sort of myopic, but that you mention FATE and not the jobs people seems significant.
“It would be nice to know if China would be capable of overtaking the US if we were to slow down progress or if we can safely model them as being a bit behind no matter how fast the US goes.”
I think compute is the crux here. Dario was recently arguing that OOMs of chips matter, and the tens of thousands of chips sufficient for current DeepSeek models would be difficult to scale to the hundreds of thousands or millions that are probably necessary at some point in the chain. (Probably the line of ‘reasoning models’ descended from ~GPT-4 has worse returns per dollar spent than the line of reasoning models descended from GPT-5, esp. if the next large model is itself descended from these smaller reasoners.) [<70 percent confident]
If that’s true, then compute is the moat, and export controls/compute governance still get you a lot re: avoiding multipolar scenarios (and so shouldn’t be deprioritized, as your post implies).
I’m also not sure about this ‘not appealing to the American Left’ thing. Like, there’s some subset of conservative politicians that are just going to support the thing that enriches their donors (tech billionaires), so we can’t Do The Thing* without some amount of bipartisan support, since there are bad-faith actors on both sides actively working against us.
“Developing new policy proposals which fit with the interests of Trump’s faction”
I’d like to point out that there’s a middle ground between trying to be non-partisan and convincing people in good faith of the strength of your position (while highlighting the ways in which it synergizes with their pre-existing concerns), and explicitly developing proposals that fit their interests. The latter absolutely screws you if the tables turn again (as they did last time, when we collaborated with FATE on, e.g., the EO), and the former (while more difficult!) is the path to more durable (and bipartisan!) wins.
*whatever that is