Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.
It's weird that one of their “red lines” is a moral line in the sand grounded in convictions about political philosophy, while the other is a “not wrong, but early” claim about reliability. I read this as Dario saying pretty clearly that once AIs are reliable enough to run human-out-of-the-loop killchains, Anthropic will be happy to power them.
And I'm worried this is a nuance that not all Anthropic employees or https://notdivided.org/ signers have noticed, and one that many of them would disagree with.
Well, at some point AI was supposed to be our Leviathan. The reason this has turned so weird is that the US is now an autocracy clearly opposed to the concept of a liberal democratic world-state, which has been the unspoken goal of everything since WW2.
Fully autonomous weapons seem to me a clear-cut case of differential acceleration in any case. They confer no legitimate battlefield advantage on law-abiding democratic countries (human reflexes are already near the top of the sigmoid; this is one of our main evolutionarily-selected skills, for obvious reasons). But they do allow authoritarians to run a military dictatorship with minimal staff (historically, “the army is ultimately made up of ordinary people, who can refuse to shoot their brethren and/or shoot the dictator instead” has been an important pressure valve), or to organize genocidal massacres with automated recognition of targeted civilians (i.e. the FLI Slaughterbots scenario).