I apologise if I’m missing something as I went over this very quickly.
I think a key objection for me is to the idea that wild animals will be included in space settlement in any significant numbers.
If we do settle space, I expect most of that, outside of this solar system, to be done by autonomous machines rather than human beings. Most easily habitable locations in the universe are not on planets, but rather freestanding structures in space, built using resources from asteroids and powered by solar energy.
Autonomous intelligent machines will be at a great advantage over animals from Earth, which are poorly adapted to surviving a long journey through interstellar space or to thriving on other planets.
In a wave of settlement, machines should vastly outpace actual humans and animals, as they can travel faster between stars and populate those star systems more rapidly.
If settlement is done by ‘humans’ it seems more likely to be performed by emulated human minds running on computer systems.
In addition to these difficulties, there is no practical reason to bring animals. By that stage of technological development we will surely be eating meat produced without a whole animal, if we eat meat at all. And if we want to enjoy the experience of natural environments on Earth, we will be able to do so in virtual reality vastly more cheaply than by terraforming the planets we arrive at.
If I did believe animals were going to be brought on space settlement, I would think the best wild-animal-focussed project would be to prevent that from happening, by figuring out what could motivate people to do so, and pointing out the strong arguments against it.
I worry this is very overconfident speculation about the very far future. I’m inclined to agree with you, but I feel hard-pressed to put more than, say, 80% odds on it. I think the kind of s-risk nonhuman animal dystopia that Rowe mentions (and that Brian Tomasik has previously discussed) seems possible enough to merit significant concern.
(To be clear, I don’t know how much I actually agree with this piece, agree with your counterpoint, or how much weight I’d put on other scenarios, or what those scenarios even are.)
80% seems reasonable. It’s hard to be confident about many things that far out, but:
i) We might be able to judge some conditional questions more easily than the unconditional questions they depend on. For example, it might be easier to say whether we’d bring pigs to Alpha Centauri if we go than whether we’ll ever go to Alpha Centauri at all (see the short note after this list).
ii) That we’ll terraform other planets is itself fairly speculative, so it seems fair to meet speculation with other speculation. There’s not much alternative.
iii) Inasmuch as we’re focussing in on what is, in my opinion, a narrow part of the whole probability space (flesh-and-blood humans going to colonise other stars and bringing animals with them), we can develop approaches that seem most likely to work in that particular scenario, rather than finding something that would hypothetically work across the whole space.
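To put point i) slightly more formally, it rests on the standard decomposition of a joint probability:

$$
P(\text{pigs at Alpha Centauri}) = P(\text{we go}) \times P(\text{pigs} \mid \text{we go})
$$

The conditional factor may be easier to assess than the marginal one, even when the marginal is deeply uncertain.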
I agree. However, I suppose that under an s-risk longtermist paradigm, even a tiny chance of spacefaring turning out in a particular way could still be worth taking action to prevent, or could even be of utmost importance.
To wit, a lot of the retorts to Abraham’s argument appear to me to be of the form “well, this seems rather unlikely to happen”, whereas I don’t think such an argument actually succeeds.
And to reiterate for clarity, I’m not taking a particular stance on Abraham’s argument itself—only saying why I think this one particular counterargument doesn’t work for me.
Part of the issue might be the subheading “Space colonization will probably include animals”.
If the heading had been ‘might’, then people would be less likely to object. Many things ‘might’ happen!
Good point. I agree.
That makes sense!
Peter, do you find my arguments in the comments below persuasive? Basically I tried to argue that the relative probability of extremely good outcomes is much higher than the relative probability of extremely bad outcomes, especially when weighted by moral value. (And I think this is sufficiently true for both classical utilitarians and people with a slight negative leaning.)
Hey Rob!
I’m not sure that even under the scenario you describe animal welfare doesn’t end up dominating human welfare, except under a very specific set of assumptions. In particular, you describe ways for human-esque minds to explode in number (propagating through space as machines or as emulations). Without appropriate efforts to change the way humans perceive animal welfare (wild animal welfare in particular), it seems very possible that:
1) humans or their machine descendants might manufacture or emulate animal minds (and, since wild animal welfare hasn’t been addressed, emulate their suffering);
2) animals will continue to exist and suffer on our own planet for millennia; or
3) taking an idea from Luke Hecht, there may be vastly more wild “animals” suffering already off-Earth: if we think there are human-esque alien minds, then there are probably vastly more alien wild animals, and the emulated minds that descend from humans may have to address cosmic wild animal suffering.
All three of these situations mean that even when the total expected welfare of the human population is incredibly large, the total expected welfare (or potential welfare) of animals may also be incredibly large, and it isn’t easy to see in advance that one would clearly outweigh the other (unless animal life, biological and synthetic, is eradicated relatively early in the timeline compared to the propagation of human life, which is an additional assumption).
Regardless, if all situations where humans are bound to the solar system, and many where they leave, result in animal welfare dominating, then your credence that animal welfare will continue to dominate should be at least as high as your credence that humans will stay bound to the solar system (and so, unless you think leaving is more likely than not, higher than your credence that humans will leave). So neglecting animal welfare on the grounds that humans will dominate via space exploration seems to require further information about the relative probabilities of the various situations, multiplied by the relative populations in these situations.
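To make the shape of that calculation concrete, here is a toy sketch. Every scenario, probability, and population figure in it is purely hypothetical, chosen only to show that the comparison turns on scenario probabilities multiplied by populations, and reflects no one’s actual estimates:

```python
# Toy comparison of expected human-ish vs. animal-ish welfare-bearers
# across hypothetical future scenarios. Every probability and population
# below is invented purely for illustration.

scenarios = [
    # (name, probability, human-ish minds, animal-ish minds)
    ("bound to the solar system",   0.60, 1e10, 1e15),
    ("expansion, animals included", 0.05, 1e14, 1e18),
    ("expansion, animals excluded", 0.35, 1e14, 0.0),
]

expected_humans = sum(p * h for _, p, h, _ in scenarios)
expected_animals = sum(p * a for _, p, _, a in scenarios)

print(f"expected human-ish minds:  {expected_humans:.2e}")
print(f"expected animal-ish minds: {expected_animals:.2e}")
# With these made-up numbers the animal term still dominates, because the
# scenarios in which animals exist at all involve very large populations.
# Different invented numbers would flip the conclusion, which is the point:
# the comparison turns on probabilities multiplied by populations.
```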
I haven’t attempted any particular expected value calculation, but it doesn’t seem to me like you can conclude immediately that, simply because human welfare has the potential to be infinite or extravagantly large, the potential value of working on human welfare is definitely higher. That claim instead requires the additional assertion that animal welfare will not also be incredibly or infinitely large, which, as I describe above, requires further evidence. And you would also have to account in that expected value calculation for the fact that wild animal welfare seems vastly more important currently, and will be for the near future (which, given that your objection focuses on the future, I take it you might already believe?).
If this is your primary objection, at best it seems like it ought to marginally lower your credence that animal welfare will continue to dominate. It strikes me as an extremely narrow possibility among many, many possible worlds where animals continue to dominate welfare considerations, and therefore, in expectation, we should still think animal welfare will dominate into the future. I’d be interested in your specific credence that the situation you outlined will happen.
neglecting animal welfare on the grounds that humans will dominate via space exploration seems to require further information about the relative probabilities of the various situations, multiplied by the relative populations in these situations.
I took the argument to mean that artificial sentience will outweigh natural sentience (e.g. animals). You seem to be implying that the relevant question is whether there will be more human sentience or more animal sentience, but I’m not quite sure why. I would predict that most of the sentience that will exist will be neither human nor animal.
Ah, I meant human sentience, emulated or organic, since Rob referred to emulated humans in his comment. For less morally weighty digital minds, the same questions regarding emulating animal minds apply, though the terms ought to be changed.
Also, it seems worth noting that much of the literature on longtermism, outside the Foundational Research Institute, isn’t making claims explicitly about digital minds as the primary holders of future welfare, but just focuses on future organic human populations (Greaves and MacAskill’s paper, for example), and on populations of similar size to the present-day human population at that.
I also expect artificial sentience to vastly outweigh natural sentience in the long run, though it’s worth pointing out that we might still expect focusing on animals to be worthwhile if it widens people’s moral circles.
If I did believe animals were going to be brought on space settlement, I would think the best wild-animal-focussed project would be to prevent that from happening, by figuring out what could motivate people to do so,
One way this could happen is if the deep ecologists or people who care about life-in-general “win”, and for some reason have an extremely strong preference for spreading biological life to the stars without regard to sentient suffering.
I’m pretty optimistic this won’t happen, however. I think by default we should expect that the future (if we don’t die out) will be predominantly composed of humans and our (digital) descendants, rather than things that look like wild animals today.
Another thing the analysis leaves out is that, even aside from space colonization, evolved biological life is likely to be an extremely inefficient method of converting energy into positive (or negative!) experiences.