Ok. I don’t put much weight on s-risks being a likely outcome. It seems far more likely that the solar system (and beyond) will simply be arranged in some (to us) arbitrary way, and all carbon-based life will be lost as collateral damage.
Although I guess if you are looking a bit nearer term, then s-risk from misuse could be quite high. But I don’t think any of the major players (OpenAI, DeepMind, Anthropic) are really working on preventing misuse as part of their strategy (their core AI Alignment work is on aligning the AIs, rather than the humans using them!). So actually, this is just another reason to shut it all down.