I take the utilitarian longtermist position to be that we ought to prioritize maximizing the probability that intelligent life is able to take advantage of the cosmic endowment.
I phrase it that way in order to be species-agnostic. Given our position of ignorance about intelligent life in the universe, and the significant existential risks we face over the next couple of centuries, it seems to me that we can right now increase the chance of intelligent life taking advantage of the cosmic endowment by increasing the chance that life exists beyond Earth.
We can do this through directed panspermia, and we can calculate that, with enough seeds emitted, evolution would eventually produce intelligent life elsewhere with a probability high enough to offset the probability that we destroy ourselves.
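As a toy illustration of that kind of calculation (every number below is an invented assumption, and each seeded biosphere is treated as an independent chance of eventually producing intelligent life):

```python
# Toy sketch only: every number here is an invented assumption, not an estimate.

def p_intelligent_life(p_we_survive, n_seeds, p_seed_succeeds):
    """Chance that at least one lineage (ours, or a seeded biosphere)
    eventually produces intelligent life, treating outcomes as independent."""
    p_all_seeds_fail = (1 - p_seed_succeeds) ** n_seeds
    return 1 - (1 - p_we_survive) * p_all_seeds_fail

# No panspermia: everything rides on our own survival.
print(p_intelligent_life(0.8, n_seeds=0, p_seed_succeeds=0.0))        # 0.8
# Many seeds, each with a tiny chance of eventually producing intelligent life:
print(p_intelligent_life(0.8, n_seeds=10_000, p_seed_succeeds=1e-4))  # ~0.93
```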
I think the decision is a difficult one, much more difficult than it’s been given credit for, with the default being to protect the sterility or potential biome of planets during our exploration efforts. However, if our long-term plan is to become interplanetary, then we already plan on directed panspermia. Why not buy down the objective risk of a universe devoid of intelligent life through panspermia now? Call it a biotic hedge.
(I think Denis Drescher makes a lot of good points, and some of this answer overlaps with points made in that thread.)
My answer would be: “Utilitarian longtermism does not necessarily or directly imply we should put resources towards directed panspermia, nor even that directed panspermia would be good (i.e., if we could have it for free).”
Utilitarianism is about maximising net wellbeing (or something like that), and doesn’t intrinsically value things like the amount or survival of life forms or intelligence. Those things are very likely highly instrumentally valuable, but whether and how valuable they are doesn’t fall directly out of utilitarianism, and instead relies on some other assumptions or details.
Here are some further considerations that I think come into play:
As noted by edcon, it seems likely that it would take a lot of resources to actually implement directed panspermia, or even to develop the ability to “switch it on” if needed. So even if it would be good to do, it may not be worth utilitarian longtermists prioritising it.
Though maybe having one person write a paper analysing the idea could be worthwhile. That said, it’s possible such an analysis already exists, and I’m pretty sure there’s at least been tangential discussion in various places, such as discussion of the potential downsides by suffering-focused EAs.
“Existential risks” are not the same as “extinction risks”. Instead, existential risks are risks of the destruction of humanity’s long-term potential (or that of humanity’s “descendants”, so to speak). (I’m not saying you don’t know this, but it seems worth emphasising here.) So directed panspermia could perhaps itself be an existential catastrophe, or increase existential risk. This would be the case if it had irreversible consequences that prevented us from reaching something close to the best future possible, or if it increased the chances of such consequences occurring. Here are three speculative sketches of how that might happen:
There’s a proliferation of other civilizations, which are on average less aligned with “good” values than we are (perhaps because we’re in a slightly unlikely good equilibrium; some somewhat relevant discussion here). Perhaps this makes it harder for us to expand and use more resources in a “really good” way. Or perhaps it raises the chances that those civilizations wipe us out.
There’s a proliferation of net-negative lives, which we lack the will or ability to improve or “euthanise”.
There’s a proliferation of net-positive lives, but we engage in conflicts with them to seize more resources, perhaps based on beliefs or rationalisations that one of the above two scenarios is happening. And this ends up causing a lot of damage.
Directed panspermia might not reduce the biggest current x-risks much in any case. Ord has a box “Security among the stars?” in Chapter 7 that discusses the idea that humanity can reduce x-risk by spreading to other planets (which is different to directed panspermia, but similar in some respects). He notes that this only helps with risks that are statistically independent between planets, and that many risks (e.g., unaligned AGI) are likely to be quite correlated, such that, if catastrophe strikes somewhere, it’s likely to spread to other planets too. (Though spreading to other planets would still help with some risks.)
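To make Ord’s independence point concrete, here’s a minimal sketch with made-up numbers (the per-planet catastrophe probability is purely an assumption for illustration):

```python
# Made-up numbers, purely to illustrate the independence point.
# With k settled planets and a per-planet catastrophe probability p:
#  - a statistically independent risk must strike each planet separately,
#    so the chance of losing everything is p ** k;
#  - a fully correlated risk (e.g., unaligned AGI that spreads everywhere)
#    strikes them all together, so the chance of losing everything stays p.

p = 0.10  # assumed per-planet catastrophe probability
for k in (1, 2, 5):
    print(f"{k} planet(s): independent risk {p ** k:.4%}, correlated risk {p:.0%}")
```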
I’d guess we could capture much of the value of directed panspermia, with far fewer downsides, by accelerating space colonisation. Though even then, I think I’d favour us having some portion of a “Long Reflection” before going very far with that, essentially for the reason Ord gives in the passage Denis Drescher quotes.
Another option that might capture some of the benefits, with fewer risks, is “leav[ing] a helpful message for future civilizations, just in case humanity dies out” (discussed in this 80k episode with Paul Christiano).
This article has some good discussion on things like the possibility of intelligent alien life or future evolution on Earth, and the implications of that. That seems relevant here in some ways.
I think metaethics is also important here. In particular, I’d guess that directed panspermia looks worse from various types of subjectivist perspectives than from various types of (robust) moral realist perspectives, because one’s metaethics will influence how happy we should be with the value systems other civilizations might somewhat “randomly” land on, compared to our own, and how “random” we expect their value systems to be. (This is a quick take.)
We will probably gain the ability to spread life via directed panspermia (as a feasible option for addressing correlated risks and building a safety net) decades before we gain the ability to bring civilisation to other solar systems.
The “long reflection” could lead to an increase in biotic ethics, favoring further investments in directed panspermia.