What are your arguments for expecting Alien space-faring civilizations to have similar, or lower (e.g. 0), or higher expected utility than a future Earth-originating space-faring civilization?
For me, the most important factor is whether the aliens are altruistic. If they’re altruistic, they’ll do acausal trade to reduce suffering in other lightcones.
How common altruism is in evolved life seems like a complex question to answer. If you do research that question in particular, I’d be interested to see your conclusions, although it probably wouldn’t be decision-relevant in my view (I’d probably still think the best thing I can do is to work on alignment).
There is a way in which the (relative) frequency of altruistic superintelligences can be decision-relevant, though, when certain other conditions are met. Consider what we would want to do to reduce s-risks in each of these two circumstances:
Toy model premise: Earth has a 40% chance of resulting in an aligned ASI and a 1% chance of resulting in an indefinite s-event causer of a kind that does not accept acausal trades.
1. In the broader universe, we think there are probably more altruistically-used lightcones than lightcones controlled by s-event causers who are willing to engage in acausal trade.[1] That is, we think probably all the s-risks that are preventable through trade will be prevented.
2. In the broader universe, we think the situation described in (1) is probably not true; we think there are probably fewer altruists, relatively.
In (2)’s case, where altruism is relatively uncommon, increasing the number of altruist lightcones is impactful. Also in (2)’s case, making sure we’re not risking creating more theoretically-preventable s-risks would be more impactful (because, in (2)’s case, they won’t actually be prevented through trade).
In (1)’s case, reducing the probability that we’ll somehow end up creating an s-event causer that can’t be averted by acausal trade would be more important, while adding another altruist lightcone would be less likely to prevent s-events (because the preventable ones would be prevented anyway).
I don’t know how important considerations like this actually are in absolute (rather than relative) terms, though.
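To make the case (1) vs. case (2) comparison concrete, here is a minimal sketch in Python. The `coverage` parameter is a hypothetical stand-in I’m introducing for how likely a tradeable s-event is to already be prevented by some existing altruist lightcone (near 1 in case (1), lower in case (2)); the outputs are in arbitrary units, and none of the numbers are estimates.

```python
# Minimal sketch of the toy model above. All values are illustrative
# assumptions, not estimates. Units are arbitrary "expected s-events
# prevented per unit of probability shifted".

def marginal_values(coverage: float) -> dict:
    """Rough marginal value of three interventions, given `coverage`:
    the probability that a tradeable s-event elsewhere is already
    prevented by some existing altruist lightcone."""
    return {
        # Raising P(Earth -> aligned ASI): the extra altruist lightcone only
        # prevents tradeable s-events that no one else would have covered.
        "increase_p_aligned": 1 - coverage,
        # Lowering P(Earth -> tradeable s-event causer): only matters to the
        # extent those s-events would NOT have been bought out via trade.
        "decrease_p_tradeable_s_risk": 1 - coverage,
        # Lowering P(Earth -> untradeable s-event causer): no trade can avert
        # these, so the value does not depend on how many altruists exist.
        "decrease_p_untradeable_s_risk": 1.0,
    }

print("case (1), coverage ~ 0.99:", marginal_values(0.99))
print("case (2), coverage ~ 0.30:", marginal_values(0.30))
```

Under these assumptions, reducing the untradeable risk dominates when coverage is near 1, and the other two interventions matter more as coverage drops, which is just the point above restated in numbers.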
Edit: Another thing that seems worth saying explicitly, in response to “I am working on a project about estimating alien density”, is that in my model, density per se is not relevant; what matters is the relative frequency of altruistic versus s-event-causing civilizations.
(Premise: some s-risks are preventable through acausal trade, namely those caused by entities which also value non-suffering things, and who would not arrange matter into suffering if a sufficient amount of those other things were arranged in other lightcones in return. I don’t expect all values to be neatly maximizer-y to begin with; this is just a simplified model.)