(I think Denis Drescher makes a lot of good points, and some of this answer overlaps with points made in that thread.)
My answer would be: "Utilitarian longtermism does not necessarily or directly imply we should put resources towards directed panspermia, nor even that directed panspermia would be good (i.e., even if we could have it for free)."
Utilitarianism is about maximising net wellbeing (or something like that), and doesn't intrinsically value things like the amount or survival of life forms or intelligence. The latter things are very likely very instrumentally valuable, but whether and how valuable they are doesn't fall directly out of utilitarianism, and instead relies on some other assumptions or details.
Here are some further considerations that I think come into play:
As noted by edcon, it seems likely that it would take a lot of resources to actually implement directed panspermia, or even to develop the ability to "switch it on" if needed. So even if doing so would be good, it may not be worth utilitarian longtermists prioritising it.
Though maybe having one person write a paper analysing the idea could be worthwhile. It's also possible that such a paper already exists, and I'm pretty sure there's at least been tangential discussion in various places, such as discussion of the potential downsides by suffering-focused EAs.
"Existential risks" are not the same as "extinction risks". Instead, existential risks are risks of the destruction of humanity's long-term potential (or that of humanity's "descendants", so to speak). (I'm not saying you don't know this, but it seems worth emphasising here.) So directed panspermia could perhaps itself be an existential catastrophe, or could increase existential risk. This would be the case if it had irreversible consequences that prevent us from reaching something close to the best future possible, or if it increased the chances of such consequences occurring. Here are three speculative sketches of how that might happen:
There's a proliferation of other civilizations, which are on average less aligned with "good" values than we are (perhaps because we're in a slightly unlikely good equilibrium; some somewhat relevant discussion here). Perhaps this makes it harder for us to expand and use more resources in a "really good" way. Or perhaps it raises the chances that those civilizations wipe us out.
There's a proliferation of net-negative lives, which we lack the will or ability to improve or "euthanise".
There's a proliferation of net-positive lives, but we engage in conflicts with them to seize more resources, perhaps based on beliefs or rationalisations that one of the above two scenarios is happening. And this ends up causing a lot of damage.
Directed panspermia might not reduce the biggest current x-risks much in any case. Ord has a box "Security among the stars?" in Chapter 7 that discusses the idea that humanity can reduce x-risk by spreading to other planets (which is different to directed panspermia, but similar in some respects). He notes that this only helps with risks that are statistically independent between planets, and that many risks (e.g., unaligned AGI) are likely to be quite correlated, such that, if catastrophe strikes somewhere, it's likely to spread to other planets too. (Though spreading to other planets would still help with some risks.)
I'd guess we could capture much of the value of directed panspermia, with far fewer downsides, by accelerating space colonisation. Though even then, I think I'd favour us having some portion of a "Long Reflection" before going very far with that, essentially for the reason Ord gives in the passage Denis Drescher quotes.
Another option that might capture some of the benefits, with fewer risks, is "leav[ing] a helpful message for future civilizations, just in case humanity dies out" (discussed in this 80k episode with Paul Christiano).
This article has some good discussion on things like the possibility of intelligent alien life or future evolution on Earth, and the implications of that. That seems relevant here in some ways.
I think metaethics is also important here. In particular, I'd guess that directed panspermia looks worse from various types of subjectivist perspectives than from various types of (robust) moral realist perspectives, because one's metaethics will influence how "random" we should expect other civilizations' value systems to be, and how happy we should be with whatever value systems they land on, compared to our own. (This is a quick take.)