I think your reasoning is basically correct (as far as I can tell), or at least your conclusion that “we should devote more effort into averting existential risks that make such colonization less likely” (given your premise that alien civs would be somewhat valuable). I find your appeal to evolution a somewhat convincing first-pass argument for the premise that alien civilizations would likely instantiate value, though I wouldn’t be that surprised if that premise turned out to be wrong under more scrutiny. I feel less sure about your claims regarding which catastrophes would make alien civs more or less likely, though I’d agree at least crudely/directionally. (For example, I think there are many ‘AI catastrophes’ that would be quite compatible with alien civs.)
Anecdotally, I’ve frequently discussed this and similar considerations with EAs interested in existential risk strategy (or “cause prioritization” or “macrostrategy”). (I don’t recall having talked about this issue with Nick Bostrom specifically, but I’m maybe 95% confident that he’s familiar with it.) My guess is that published sources significantly underrepresent the extent to which people have considered questions around aliens, though on the other hand I’m not aware of any thinking that goes much beyond your post in depth.
I’m wondering what you mean when you say, “I think there are many ‘AI catastrophes’ that would be quite compatible with alien civs.” Do you think that there are relatively probable existential catastrophes from rogue AI that would allow for alien colonization of Earth? I’m having a hard time thinking of any and would like to know your thoughts on the matter.
Do you think that there are relatively probable existential catastrophes from rogue AI that would allow for alien colonization of Earth?
Yes, that’s what I think.
First, consider the ‘classic’ Bostrom/Yudkowsky catastrophe scenario in which a single superintelligent agent with misaligned goals kills everyone and then, for instrumental reasons, expands into the universe. I agree that this would be a significant obstacle for alien civilizations (though it wouldn’t make them totally impossible: e.g., there’s some, albeit perhaps tiny, chance that an expanding alien civilization could be a more powerful adversary, or that there could be some kind of trade, or …).
However, I don’t think we can be highly confident that this is what an existential catastrophe due to AI would look like. Cf. Christiano’s What failure looks like, Drexler’s Reframing Superintelligence, and also recent posts on AI risk arguments/scenarios by Tom Sittler and Richard Ngo. For some of the scenarios discussed there, I think it’s hard to tell whether they’d pose an obstacle to alien civilizations or not.
More broadly, I’d be wary of assigning very high confidence to any feature of a post-AI-catastrophe world. AI that could cause an existential catastrophe is a technology we don’t currently possess and can’t anticipate in all its details; therefore, I think it’s quite likely that an actual catastrophe caused by such AI would have unanticipated properties in at least some respects, i.e., it would not fall completely into any category of catastrophe we currently anticipate. Relatively robust high-level considerations, such as Omohundro’s convergent instrumental goals argument, can nevertheless give us good reasons to assign significant credence to some properties (e.g., a superintelligent AI agent seems likely to acquire resources), but I don’t think they suffice for >90% credence in anything.
That sounds reasonable.