It’s worth noting that it’s realistically possible for surviving to be bad, whereas promoting flourishing is much more robustly good.
Survival is only good if the future it enables is good. This may not be the case. Two plausible examples:
Wild animals generally live bad lives and we never solve this: it’s quite plausible that most animals have bad lives. Animals considerably outnumber humans, so I’d say we probably live in a negative world now. Whether survival is good may then hinge on us solving the problem of wild animal suffering. We don’t currently have great ideas on how to do that, so making sure we survive might be a bit of a gamble.
We create suffering digital minds: The experience of digital minds may dominate far-future EV calculations given how many might be created. We can’t currently be confident they will have good lives—we don’t understand consciousness well enough. Furthermore, the futures where we create digital minds may be the ones where we wanted to “use” them, which could mean they suffer.
Survival could still be great of course. Maybe we’ll solve wild animal suffering, or we’ll have so many humans with good lives that this will outweigh it. Maybe we’ll make flourishing digital minds. But I wanted to flag this asymmetry between promoting survival and promoting flourishing, as the latter is considerably more robust.
I think this is an important point, but my experience is that when you try to put it into practice things become substantially more complex. E.g. in the podcast Will talks about how it might be important to give digital beings rights to protect them from being harmed, but the downside of doing so is that humans would effectively become immediately disempowered because we would be so dramatically outnumbered by digital beings.
It generally seems hard to find interventions which are robustly likely to create flourishing (indeed, “cause humanity to not go extinct” often seems like one of the most robust interventions!).
A lot of people would argue a world full of happy digital beings is a flourishing future, even if they outnumber and disempower humans. This falls out of an anti-speciesist viewpoint.
Here is Peter Singer commenting on a similar scenario in a conversation with Tyler Cowen:
COWEN: Well, take the Bernard Williams question, which I think you’ve written about. Let’s say that aliens are coming to Earth, and they may do away with us, and we may have reason to believe they could be happier here on Earth than what we can do with Earth. I don’t think I know any utilitarians who would sign up to fight with the aliens, no matter what their moral theory would be.
SINGER: Okay, you’ve just met one.
This is pretty chilling to me, actually. Singer is openly supporting genocide here, at least in theory. (There are also shades of “well, it was ok to push out all those Native Americans because we could use the land to support a bigger population.”)
I’m not an expert, but I think you’ve misused the term genocide here.

The UN definition of genocide (1948 Genocide Convention, Article II): “Genocide means any of the following acts committed with intent to destroy, in whole or in part, a national, ethnical, racial or religious group, as such:

(a) Killing members of the group; ...
Putting aside that homo sapiens isn’t one of the protected groups, the “as such” is commonly interpreted to mean that the victim must be targeted because of their membership of that group and not some incidental reason. In the Singer case, he wouldn’t be targeting humans because they are humans, he’d be targeting them on account of wanting to promote total utility. In a scenario where the aliens aren’t happier, he would fight the aliens.
I’m probably just missing your point here, and what you’re actually getting at is that Singer’s view is simply abhorrent. Maybe, but if you read the full exchange, what he’s saying is that, in a war, he would not choose a side based on species but instead based on what would promote the intrinsic good. Importantly, I don’t think he says he would invite/start the war, only how he would act in a scenario where a war is inevitable.
Even under that definition, the aliens sound to me like they intend to eliminate humans, albeit as a means to an end, not an end in itself. If the Armenian genocide happened to be more about securing a strong Turkish state than any sort of Nazi-style belief that the existence of Armenians was itself undesirable because they were somehow inherently evil, it wouldn’t mean it wasn’t genocide. (Not sure what the actual truth is on that.) But yes, I am more bothered about it being abhorrent than about whether it meets the vague legal definition of the word “genocide” given by the UN. (Vague because, what is it to destroy “in part”? If a racist kills one person because of their race, is that an attempt to destroy a race “in part” and so genocide?)
“Importantly, I don’t think he says he would invite/start the war, only how he would act in a scenario where a war is inevitable.” If someone signed up to fight a war of extermination against Native Americans in 1800 after the war already started, I’m not sure “the war was already inevitable” would be much of a defence.
We’re just getting into the standard utilitarianism vs deontology argument. Singer may just double down and say that just because you feel it’s abhorrent doesn’t mean it is.
There are examples of things that seem abhorrent from a deontological perspective, but good from a utilitarian perspective, and that people are generally in favor of. The bombings of Hiroshima and Nagasaki are perhaps the clearest case.
Personally, I think utilitarianism is the best moral theory we have, but I have some moral uncertainty and so factor deontological reasoning into how I act. In other words, if something seems like an atrocity, I would have to be very confident that we’d get a lot of payoff to be in favor of it. In the alien example, I think it is baked in that we are pretty much certain it would be better for the aliens to take over—but in practice this confidence would be almost impossible to come by.
I agree that this is in some sense part of a more general utilitarianism vs intuitions thing.
Are people generally in favour of the bombings? Or do you really mean *Americans*? What do people in liberal democracies that didn’t participate in WW2 (say, Spain) think? People in Nigeria? India? Personally, I doubt you could construct a utilitarian defense of first dropping the bombs on cities rather than starting with a warning-shot demonstration at the very least. It is true, I think, that war is a case where people in Western liberal democracies tend to believe that some harm to innocent civilians can be justified by the greater good. But it’s also, I think, true that people in all cultures have a tendency to believe implausible justifications for prima facie very bad actions taken by their countries during wars.
I don’t know about globally, but there are a lot of Chinese people, and they generally support the bombings, which has to take us a fair bit of the way towards general support. (I’m not aware of any research into the views of Indians or Nigerians). And the classic utilitarian defense is that there were a limited number of bombs of unknown reliability, so they couldn’t be wasted—though to be honest, asking for warning shots seems a bit like special pleading. Warning shots are for deterring aggression in the first place—not for after the attacker has already struck and shows no sign of stopping.
The overwhelming majority of Manhattan Project scientists, as well as the Undersecretary of the Navy, believed there should be a warning shot. It makes total sense from a game theory perspective to fire a warning shot when you believe your military advantage has significantly increased in a way that would change the enemy’s calculus.
My point wasn’t necessarily that I believe most people worldwide think the bombing was wrong, but rather that it’s unlikely JackM has access to what “most people” think worldwide, and that, insofar as he does have a sense of what most Americans think about this, it’s at least very plausible, for standard reasons of nationalism and in-group bias, that Americans have a more favourable view of the bombings than the world as a whole. But “plausible” just means that, not definitely true.
As for the fact that they had few bombs: that is true, and I did briefly think it might enable the utilitarian defence you are giving, but if you think things through carefully, I don’t think it really works all that well. The reason that the bombings pushed Japan towards surrender* is not, primarily, that it was much harder for Japan to fight on once Hiroshima and Nagasaki were gone, but rather the fear that the US could drop more bombs. In other words, the Japanese weren’t prepared to risk the US having more bombs ready, or being able to manufacture them quickly. That fear could certainly also have been generated simply by proof that the US had the bomb. I guess you could try and argue a warning shot would have had less psychological impact, but that seems speculative to me.
*There is, I believe, some level of historical debate about how much longer they would have held out anyway, so I am not sure whether the bombings alone were decisive.
That may be fair. Although, if what you’re saying is that the bombings weren’t actually justified when one uses utilitarian reasoning, then the horror of the bombings can’t really be an argument against utilitarianism (although I suppose it could be an argument against being an impulsive utilitarian without giving due consideration to all your options).
I did not use the bombings as an argument against utilitarianism.
Yeah, I didn’t mean to imply you had. This whole Hiroshima convo got us quite off topic. The original point was that Ben was concerned about digital beings outnumbering humans. I think that concern originates from some misplaced feeling that humans have some special status on account of being human.
I agree with the core point, and that was part of my motivation for working on this area. There is a counterargument, as Ben says, which is that any particular intervention to promote flourishing might be very non-robust.
And there is an additional argument: in worlds in which you have successfully reduced x-risk, the future is more likely to be negative-EV, because those are more likely to be worlds in which x-risk was high in the first place, and high-risk worlds are more likely to be going badly in general (e.g. great power war).
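A minimal sketch of that selection effect (the world types, probabilities, and values below are made-up numbers purely for illustration, not estimates from the podcast or anywhere else):

```python
# Toy model: two world types. "Turbulent" worlds have higher x-risk and also
# worse expected futures conditional on survival. All numbers are invented.
p_world = {"calm": 0.5, "turbulent": 0.5}           # prior over world types
p_extinction = {"calm": 0.05, "turbulent": 0.5}     # x-risk absent any intervention
value_if_saved = {"calm": 10.0, "turbulent": -2.0}  # EV of the future, given survival

# An x-risk intervention only makes a difference in worlds that would otherwise
# have gone extinct, so condition on "the intervention mattered":
weight = {w: p_world[w] * p_extinction[w] for w in p_world}
total = sum(weight.values())
posterior = {w: weight[w] / total for w in weight}

ev_of_marginal_survival = sum(posterior[w] * value_if_saved[w] for w in posterior)
print(posterior)                # {'calm': ~0.09, 'turbulent': ~0.91}
print(ev_of_marginal_survival)  # ~ -0.9: the worlds the intervention saves skew negative
```

Under these assumed numbers, the futures an x-risk intervention actually “buys” are dominated by the turbulent worlds, even though both world types were equally likely up front.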
I don’t think that wild animal suffering is a big consideration here, though, because I expect wild animals to be a vanishingly small fraction of the future population. Digital beings can inhabit a much wider range of environments than animals can, so even just in our own solar system in the future I’d expect there to be over a billion times as many digital beings as wild animals (the sun produces 2 billion times as much energy as lands on Earth); that ratio gets larger when looking to other star systems.
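A back-of-the-envelope check of the energy ratio mentioned above, using standard physical constants (the jump from available energy to relative population sizes is of course a further assumption the arithmetic doesn’t settle):

```python
import math

solar_luminosity_w = 3.8e26    # total power output of the Sun, in watts
solar_constant_w_m2 = 1361.0   # solar power per square metre at Earth's distance
earth_radius_m = 6.371e6       # mean radius of the Earth, in metres

# Power intercepted by Earth = solar constant x Earth's cross-sectional area
earth_intercept_w = solar_constant_w_m2 * math.pi * earth_radius_m ** 2
ratio = solar_luminosity_w / earth_intercept_w

print(f"Earth intercepts ~{earth_intercept_w:.1e} W")      # ~1.7e17 W
print(f"The Sun emits ~{ratio:.1e}x what lands on Earth")   # ~2.2e9, i.e. roughly 2 billion
```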
A counterargument to the wild animal point is that some risks may kill humanity but not all wild animals. I wonder if that’s the case for most catastrophic risks.