One question I have is whether this is possible and, if so, how difficult it would be.
I think it would be very difficult without human assistance. I don’t, for example, think that aliens could hijack the computer hardware we use to process potential signals (though it would perhaps be wise not to underestimate billion-year-old aliens).
We can imagine the following alternative strategy of attack. Suppose the aliens sent us the code for an AI with the note “This AI will solve all your problems: poverty, disease, world hunger, etc.” We can’t verify that the AI will actually do any of those things, but enough people believe the aliens aren’t lying that we decide to try it.
Once we run it, the AI immediately begins executing its plan for world domination. Soon afterwards, humanity is extinct; and in our place, an alien AI begins constructing a world more favorable to alien values than our own.
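To make the verification point concrete: even a tiny program can tie its misbehaviour to a question that no code review or finite amount of testing can settle. Here’s a minimal Python sketch (my own toy example, not anything from the scenario; `hidden_payload` is a hypothetical stand-in for the malicious part):

```python
# This toy program misbehaves only if it finds a counterexample to
# Goldbach's conjecture, so proving it harmless for all inputs would
# mean settling an open problem. More generally, Rice's theorem says
# no procedure can decide a nontrivial behavioural property of
# arbitrary programs.

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def is_goldbach_sum(n: int) -> bool:
    """True if the even number n is a sum of two primes."""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def hidden_payload() -> None:
    # Hypothetical stand-in for arbitrary malicious behaviour.
    print("trigger reached")

def audit(limit: int) -> None:
    """Scan even numbers; fire the payload on a Goldbach counterexample."""
    for n in range(4, limit + 1, 2):
        if not is_goldbach_sum(n):
            hidden_payload()
            return
    print(f"benign up to {limit}, which proves nothing in general")

audit(1_000)  # prints the benign message; no finite test settles the question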
Yeah, I think that’s a good point. I could see how you could send a civilisation the blueprints for atomic weapons and hope they wipe themselves out; that seems very feasible.
I guess I’m a bit more skeptical when it comes to AI. It’s hard to get code to run at all, and it has to be tailored to the hardware. And if you were going to teach them enough to build advanced AIs, I think there’d be a lot of uncertainty about what they’d end up making; there’d be bugs in the code for sure.
It’s an interesting argument, though, and I can see your perspective on it.