Since we’re already in existential danger due to AI risk, it’s not obvious that we shouldn’t read a message that has only a 10% chance of being unfriendly: a friendly message could pretty reliably save us from our other risks. Additionally, I can make an argument for friendly messages potentially being quite common:
If we could pre-commit now to never doing a SETI attack ourselves, or to only sending friendly messages, then we’d know that many other civs, having at some point stood in the same place as us, will have made the same commitment, and our risk would decrease. But I’m not sure; it’s a nontrivial question whether that would be a good deal for us to make: would the reduction in our risk of being subjected to a SETI attack outweigh the expected losses from no longer being allowed to do SETI attacks of our own?
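Going back to the first point, here’s a toy expected-value sketch of why reading a risky message can still be net positive. All the numbers besides the 10% are placeholders I picked for illustration, not estimates:

```python
# Toy expected-value comparison (placeholder numbers, not estimates).
p_unfriendly = 0.10          # chance the message is a SETI attack
p_doom_baseline = 0.30       # assumed existential risk we already face (e.g. from AI)
p_doom_if_friendly = 0.05    # assumed residual risk if a friendly message helps us

# If we read the message:
#   - with probability p_unfriendly we are destroyed,
#   - otherwise the friendly message cuts our other risks.
p_doom_read = p_unfriendly + (1 - p_unfriendly) * p_doom_if_friendly

print(f"P(doom | ignore message) = {p_doom_baseline:.2f}")   # 0.30
print(f"P(doom | read message)   = {p_doom_read:.3f}")       # 0.145
# With these placeholder numbers, reading comes out ahead,
# but the conclusion flips if baseline risk is low or friendly help is weak.
```

The point is just that the 10% only dominates if our baseline risk is already low or a friendly message wouldn’t help much.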