A longtermist case for directed panspermia
I previously asked whether longtermism entails that we should prepare what I call a biotic hedge, in case we fail to prevent existential catastrophe. I received some very helpful feedback. Here is the original question: https://forum.effectivealtruism.org/posts/FvfJSkJYNZwRYL6XD/does-utilitarian-longtermism-imply-directed-panspermia
The biotic hedge refers to directed panspermia: intentionally sending the seeds of life to other planets or moons around the galaxy. This is a hedge against the possibility that all life on Earth is wiped out. We know that in roughly five billion years the Sun will expand into a red giant and likely engulf Earth, which would constitute such an event.
Longtermism is the view that we should evaluate our actions based on how they will affect future beings, not just presently existing beings.
A common objection to directed panspermia is that it constitutes an irreversible, possibly harmful action (possible relative to our current state of knowledge), which should be avoided if at all possible.
Consider a few trolley problems. They take place in Trolleyville, where it is always possible that persons are tied to the tracks, but that is all you know; assessing the probability of this is difficult.
Tunnel and Wall 1. A trolley is careening toward a wall. Five people are tied to the wall. A second track diverts into a tunnel, which you cannot see into. You have the option to pull a switch and divert the trolley into the tunnel. The trolley will surely kill the five if it hits the wall. Who knows what might happen in the tunnel. This is Trolleyville, after all. Should you pull the switch?
It seems obvious to me that you should.
Tunnel and Wall 2. Now suppose the same tunnel and wall setup, but instead the five people are on the trolley. Should you pull the switch?
Again, it seems like you should, even though this is Trolleyville.
Tunnel and Wall 3. Again, we have the same tunnel and wall setup, but this time only possible future persons are on the trolley (fertilized embryos, perhaps). If they hit the wall, they will never come to exist. If they go into the tunnel, they might.
In all three cases, we take irreversible, potentially harmful action to save some morally relevant persons. Cases 1 and 2 suffice to show that these conditions alone are not enough to make the action impermissible.
Clearly I must defend case 3 a bit more, since I do not expect it to enjoy the same intuitive support. But I do think it is morally congruent with the first two cases in the relevant ways.
First, I draw on the longtermist claim that future persons matter morally, and that this mattering is not discounted by time the way money is: one future person matters morally just as much as one presently existing person.
The moral value of actions that affect them may be discounted for epistemic reasons, such as the probability that an action has a particular effect on them, but not by time alone.
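To make the contrast concrete, here is a minimal sketch of how such a valuation might be written (my own illustrative formalization, not one drawn from the longtermist literature):

\[
V(a) \;=\; \sum_{i} p_i(a)\, v_i(a)
\]

where \(p_i(a)\) is the probability that action \(a\) affects possible person \(i\), and \(v_i(a)\) is the moral value of that effect if it occurs. On this picture the discounting is done entirely by the probabilities; there is no additional factor like \(\delta^{t_i}\) that would penalize person \(i\) merely for existing at a later time \(t_i\).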
These possible persons also need not be human. I draw on the standard anti-speciesist claim that humans are not the only beings that matter morally; the case is morally the same whether the possible future persons are human or not.
So in case 3, just as in cases 1 and 2, pulling the switch sends a trolley careening down a tunnel. In none of the cases do we know whether any people, or other morally relevant beings, are on the tunnel track; we have to work with the information we have. In cases 1 and 2, that information seems to require us to pull the switch, and there is no relevant moral difference in case 3. So even without the same intuitive support, it seems we must pull the switch there too.
These three cases map onto some more concrete existential risk examples.
Case 1 is an asteroid. A big one, which would wipe out all life on Earth. If possible, we should divert it. We cannot be sure the diverted asteroid will not go on to hit and destroy some other morally relevant life elsewhere in the universe. But we should divert it anyway.
Case 2 is human spacefaring. If we can achieve this, we should, because it saves humanity from certain doom on Earth due to the expansion of the Sun.
Case 3 is directed panspermia. If we do not do it, we ensure that the Earth-originated tree of life, and the possible future persons within it, perish when the Sun expands; sending Earth-originated life out into the universe saves it from that fate.
Ignorance about what might be in the tunnel does not absolve us of the responsibility to save morally relevant beings from certain doom, and it should not absolve us of the responsibility to preserve the future persons whose existence directed panspermia would make possible.
I believe I can strengthen the intuition a bit by appealing to some conceivable scientific advances that might arrive soon. Namely, it may become possible to develop biological material that can be stored on a ship and that, when it comes into contact with a habitable environment, develops into human beings just as a zygote does. It should be intuitive that if we could do this, it would be permissible, given longtermist values. There is no distinction between these humans and those sent spacefaring on a ship, other than the social discontinuity of birth.
But if we are to avoid the accusation of speciesism, we should be willing to send other such biological material that will develop into other morally relevant beings. I contend that this is directed panspermia. It seems reasonable to claim that morally relevant beings, if not full nonhuman persons, will eventually evolve from appropriately selected biological samples.
Consider two more cases.
Tunnel and Wall 4. Five people, including you, are on a trolley careening toward a wall. In front of the tunnel, tied to the track, is one person. Is it permissible for you to pull the switch inside the trolley and veer into the tunnel, killing the one who is tied up but saving the five on the trolley?
Tunnel and Wall 5. Five people, including you, are on a trolley careening toward a wall. You can pull a switch and divert into a tunnel. As far as you can see, no one is tied to the track, but this is Trolleyville.
The analogous existential risk cases might be as follows. For case 4, an asteroid is heading toward Earth. Suppose we can save ourselves only by diverting it into Europa, and suppose we know Europa harbors life. Case 5 is the same, except we do not know whether there is life on Europa.
Between cases 4 and 5, case 5 is clearly less morally troubling. I think pulling the switch in case 4 is morally permissible; if so, case 5 is permissible as well. From a longtermist moral perspective, case 5 is nearly identical to case 3. The only difference is that in 5 you and four others are on the trolley, while in 3 the trolley carries possible future persons. According to longtermism, this is not a morally relevant difference. Case 5 seems permissible, if not obligatory, so case 3 should be permissible too, if not obligatory. Thus, given what we now know, it seems permissible to plan for and engage in directed panspermia, to ensure that possible future persons can exist beyond the expansion of the Sun.
This has implications for how we approach space exploration. Careful sterilization of spacecraft and instruments makes sense from a scientific perspective, where we do not want to contaminate the environments in which we are trying to make discoveries. But the moral case for avoiding panspermia is weaker, especially when we do not know one way or the other whether life exists elsewhere. In some cases it may even be permissible to engage in directed panspermia, if the risk to Earth's tree of life is great enough, even when there is reason to believe some primitive life exists in a target system.