I think the title may be technically correct, but it sounds nasty.
On the nitpicking side, I would argue that the danger of AI weapons mostly depends on their level of intelligence. If it is just narrow AI, fine. However, the greater their intelligence, the greater the danger, and it may reach catastrophic levels before superintelligence is created.
I would also add that a superintelligence created by the military may be perfectly aligned but still catastrophically dangerous if it is used as a universal weapon, perhaps against another military superintelligence. And the first step toward not creating a military superintelligence is not creating AI weapons.
A superintelligence would have the ability and (probably) interest to shape the entire world. Whether it comes from the military, a corporation, or a government, it will have a compelling instrumental motivation to neutralize other superintelligences.