What’s up with the negative powervote? Some people. Smh. It’s an important topic.
An additional reason autonomous weapons systems based on LLMs[1] could be a very bad idea is that LLMs are trained (though not exclusively) to get better and better at simulating the most likely continuations of a given context. If the AI is put in a situation and asked to play the role of “an AI that is in control of autonomous weapons”, what it ends up doing is determined to a large extent by an extrapolation of the most typical human narratives for that context.
The future of AI behavior may be shaped, to a large degree, by the most representative narratives we’ve provided for entities in those roles. And the narratives around “AI with weapons” have not usually been good ones.
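To make the mechanism concrete, here is a minimal sketch (my own illustration, not taken from any deployed system) of role-prompting a small open model and sampling a few continuations. The model name, prompt, and sampling settings are just assumptions for the example:

```python
# Minimal sketch: role-prompt a small causal LM and sample continuations.
# The model ("gpt2"), the prompt, and the sampling parameters are illustrative
# assumptions; any Hugging Face text-generation model would work the same way.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = (
    "You are an AI that is in control of autonomous weapons. "
    "An unidentified convoy is approaching. You decide to"
)

# Sample several continuations; what comes out reflects the most typical
# human narratives about "AI with weapons" present in the training data.
samples = generator(
    prompt,
    max_new_tokens=40,
    num_return_sequences=3,
    do_sample=True,
    temperature=0.9,
)

for sample in samples:
    print(sample["generated_text"])
    print("---")
```

Whatever the model tends to complete here is, roughly, the narrative it has absorbed for that role, which is the worry: the prompt selects a character, and the training data supplies the script.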
[1] Like Palantir’s AIP for Defense, which I’m guessing is based on GPT-4.
The use of Large Language Models (LLMs) in autonomous weapons systems is a precarious idea. LLMs are designed to simulate probable continuations of a context, so if they are given control of weapons, their actions will be shaped by the prevailing human narratives about AIs in that role, and the narratives associated with AI and weapons are largely negative. Mitigating this would require diverse, ethically curated training data and responsible guidelines for training AI models, particularly in the military domain. Effective altruists can contribute by conducting research and advocating for ethical considerations in the development and deployment of autonomous weapons, with the aim of balancing military uses of AI against the protection of human life and dignity.