“The primary theory is that alien civilizations could continuously broadcast a highly optimized message intended to hijack or destroy any other civilizations unlucky enough to tune in.”
One question I have: is this actually possible, and how difficult would it be?
If I took a professional programmer and told them to write a message that would hijack another computer just by being sent to it, wouldn’t that be extremely hard to do? Even if you already know all about human programming, you have no idea what system the target machine is running or how it is structured.
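For concreteness, this is roughly the class of flaw a message-borne exploit would have to find and target. A minimal sketch in C; the parser, buffer size, and message are all invented for illustration:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical, deliberately unsafe receiver code: a classic stack
 * buffer overflow. A message longer than 64 bytes overwrites adjacent
 * stack memory, which is how many real exploits gain control. */
static void parse_signal(const char *message) {
    char buffer[64];
    strcpy(buffer, message); /* no bounds check: the vulnerability */
    printf("decoded: %s\n", buffer);
}

int main(void) {
    parse_signal("hello from Proxima Centauri"); /* benign input is fine */
    return 0;
}
```

The catch is that a working payload has to match the exact buffer size, stack layout, instruction set, and calling convention of code like this, none of which the sender can observe from light-years away.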
Isn’t the problem then orders of magnitude harder when dealing with a totally different species and their computing equipment? The sender of the message would have no idea what computational structures they were using, and absolutely no idea of their syntax or coding conventions. Even using binary rather than ternary or decimal is a choice.
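Even between two human-built machines, how raw bits are interpreted is pure convention. A toy sketch: the same four received bytes decode to different integers depending only on the assumed byte order:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* The same four received bytes... */
    uint8_t raw[4] = {0x01, 0x02, 0x03, 0x04};

    /* ...decode to different values under different byte-order conventions. */
    uint32_t big    = (uint32_t)raw[0] << 24 | (uint32_t)raw[1] << 16 |
                      (uint32_t)raw[2] << 8  | (uint32_t)raw[3];
    uint32_t little = (uint32_t)raw[3] << 24 | (uint32_t)raw[2] << 16 |
                      (uint32_t)raw[1] << 8  | (uint32_t)raw[0];

    printf("big-endian:    %u\n", big);    /* 16909060 */
    printf("little-endian: %u\n", little); /* 67305985 */
    return 0;
}
```

And byte order is one of the milder assumptions; an alien receiver wouldn’t necessarily share byte width, two’s complement, or binary itself.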
I think it would be very difficult without human assistance. I don’t, for example, think that aliens could hijack the computer hardware we use to process potential signals (though it would perhaps be wise not to underestimate billion-year-old aliens).
We can imagine the following alternative strategy of attack. Suppose the aliens sent us the code for an AI, along with a note: “This AI will solve all your problems: poverty, disease, world hunger, etc.” We can’t verify that the AI will actually do any of those things, but enough people believe the aliens aren’t lying that we decide to try it.
Once run, the AI immediately begins its plan for world domination. Soon afterwards, humanity is extinct, and in our place the alien AI begins constructing a world more favorable to alien values than our own.
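Part of why we can’t verify the claim: a program can be written to behave identically to an honest one under any amount of testing, and switch only later. A toy sketch of such a trigger (the date and the behaviors are invented):

```c
#include <stdio.h>
#include <time.h>

/* Toy "gifted AI" with a logic bomb: it behaves helpfully until a
 * trigger condition (here, a calendar year) is met. No black-box test
 * run before the trigger distinguishes it from an honest program. */
int main(void) {
    time_t now = time(NULL);
    struct tm *t = localtime(&now);

    if (t->tm_year + 1900 < 2030) {
        printf("Solving poverty, disease, world hunger...\n"); /* what we observe */
    } else {
        printf("Beginning world domination.\n"); /* what we get later */
    }
    return 0;
}
```

More generally, deciding any nontrivial property of what arbitrary code will do is undecidable (Rice’s theorem), so “run it and see” is the only test available, which is exactly the trap.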
Yeah, I think that’s a good point. I could see how you might send a civilisation the blueprints for atomic weapons and hope they wipe themselves out; that would be quite feasible.
I’m a bit more skeptical when it comes to AI, though. It’s hard to get code to run at all, and it has to be tailored to the hardware. And if you were going to teach them enough to build advanced AIs, there’d be a lot of uncertainty about what they’d end up making; there would certainly be bugs in the code.
It’s an interesting argument, though, and I can really see your perspective on it.
Nice writeup.