I’d imagine there are several reasons this question hasn’t received as much attention as AGI Safety, but the main ones are that it’s both much lower impact and (arguably) much less tractable. It’s lower impact because, as you said, it’s not an existential risk. It’s less tractable because even if we could figure out a technical solution, there are strong vested interests against applying it (unlike AGI Safety, where all vested interests would want the AI to be aligned).
I’d imagine this sort of tech would actually decrease the risk from bioweapons and the like, for the same reason I’d imagine it would decrease terrorism generally, but I could be wrong.
Regarding the US in particular, I’m personally much less worried about corporations pushing their preferred ideologies than about them using the tech to manipulate us into buying stuff and watching their media; companies tend to be much more focussed on profits than on pushing ideologies.