When I hear about articles like this, I worry about journalists conflating “could be an X-risk” with “is an X-risk as substantial as any other”; journalism tends to wash out differences in scale between problems.
If you’re still in communication with the author, I’d recommend emphasizing that this risk has undergone much less study than AI alignment or biorisk and that there is no strong EA consensus against projects like SETI. It may be that more people in EA would prefer SETI to cease broadcasting than to maintain the status quo, but I haven’t heard about any particular person actively trying to make them stop/reconsider their methods. (That said, this isn’t my area of expertise and there may be persuasion underway of which I’m unaware.)
I’m mostly concerned about future articles that say something like “EAs are afraid of germs, AIs, and aliens”, with no distinction of the third item from the first two.
([This is not a serious recommendation and something I might well change my mind about if I thought about it for one more hour:] Yes, though my tentative view is that there are fairly strong, and probably decisive, irreversibility/option value reasons for holding off on actions like SETI until their risks and benefits are better understood. NB the case is more subtle for SETI than METI, but I think the structure is the same: once we know there are aliens there is no way back to our previous epistemic state, and it might be that knowing about aliens is an info hazard.)
If we knew that there are aliens and that they are sending some information, everybody would try to download their message. That by itself makes it an info hazard.
I agree that information we received from aliens would likely spread widely. So in this sense I agree it would clearly be a potential info hazard.
It seems unclear to me whether the effect of such information spreading would be net good or net bad. If you see reasons why it would probably be net bad, I’d be curious to learn about them.
If the message turns out to be a description of a computer and a program to run on it, it is net bad. Think of a malevolent AI that anyone could download from the stars.
Such a viral message is aimed at self-replication, and it will eventually convert Earth into its next node, using all our resources to send copies of the message farther.
Simple Darwinian logic implies that such viral messages should numerically dominate among all alien messages, if any exist. I wrote an article, linked below, discussing the idea in detail.
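As a rough illustration of the "Darwinian dominance" step (a toy model of my own, not taken from the article; all numbers are arbitrary): honest one-off broadcasts accumulate only linearly with the number of senders, while a message that converts each receiver into a new sender accumulates exponentially, so copies of the viral kind soon outnumber everything else in flight.

```python
# Toy model (illustrative only): copies of an ordinary message vs. a
# self-replicating ("viral") message after t rounds of transmission.

def ordinary_copies(senders: int, rounds: int) -> int:
    """Fixed set of civilizations, each sending one copy per round: linear growth."""
    return senders * rounds

def viral_copies(initial_nodes: int, converts_per_copy: int, rounds: int) -> int:
    """Each captured receiver becomes a new broadcaster: exponential growth."""
    nodes = initial_nodes
    total = 0
    for _ in range(rounds):
        total += nodes                   # every node sends one copy this round
        nodes *= 1 + converts_per_copy   # copies convert receivers into new nodes
    return total

for t in (5, 10, 20):
    print(t, ordinary_copies(senders=100, rounds=t), viral_copies(1, 1, t))
# Starting from a single node, the viral message overtakes 100 honest senders
# within about 10 rounds, and utterly dominates by round 20.
```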