I think it's pretty safe to say that "It seems uncooperative with the rest of humanity...'let people suffer so they'll learn their lesson'" is a strawperson for this post. This post argues that the ability of certain disasters to spark future preventative work should be a factor in cause prioritization. If that argument cannot properly be made by discussing examples or running a simulation, then I do not know how it could be made. I would be interested to hear how you would recommend discussing this if this post was not the right way to do so.
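To make the "running a simulation" point concrete, here is a minimal Python sketch of the kind of expected-value comparison I have in mind. Every number in it (death tolls, catastrophe probabilities) is an invented placeholder, not an estimate, and the expected_deaths helper is hypothetical:

```python
import random

# Toy Monte Carlo comparison of two futures: one with no "warning shot"
# disaster, and one where an endurable disaster inspires preventative work
# that lowers the chance of a far larger catastrophe. All parameters are
# illustrative placeholders, not estimates.
TRIALS = 100_000
ENDURABLE_DEATHS = 1e6       # toll of the endurable disaster
CATASTROPHE_DEATHS = 1e9     # toll of the larger future catastrophe
P_CAT_NO_WARNING = 0.05      # catastrophe probability with no warning shot
P_CAT_AFTER_WARNING = 0.01   # probability after preventative work is inspired

def expected_deaths(p_catastrophe, upfront_deaths):
    """Mean deaths across simulated futures (hypothetical helper)."""
    total = 0.0
    for _ in range(TRIALS):
        deaths = upfront_deaths
        if random.random() < p_catastrophe:
            deaths += CATASTROPHE_DEATHS
        total += deaths
    return total / TRIALS

print("no disaster:       ", expected_deaths(P_CAT_NO_WARNING, 0.0))
print("endurable disaster:", expected_deaths(P_CAT_AFTER_WARNING, ENDURABLE_DEATHS))
```

Under these made-up parameters the endurable disaster comes out better in expectation; whether it actually does is exactly the empirical question the post says should enter cause prioritization.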
I share in your hope that the attention to biorisks brought about by SARS-CoV-2 will make future generations safer and prevent even more horrible catastrophes from coming to pass.
However, I strongly disagree with your post, and I think you would have done well to heavily caveat the conclusion that this pandemic and other “endurable” disasters “may be overwhelmingly net-positive in expectation.”
Principally, while your core claim might hold water in some utilitarian analyses under certain assumptions, it almost definitely would be greeted with unambiguous opprobrium by other ethical systems, including the “common-sense morality” espoused by most people. As you note (but only in passing), this pandemic truly is an abject “tragedy.”
Given moral uncertainty, I think that, when making a claim as contentious as this one, it’s extremely important to take the time to explicitly consider it by the lights of several plausible ethical standards rather than applying just a single yardstick.
I suspect this lack of due consideration of other mainstream ethical systems underlies Khorton’s objection that the post “seems uncooperative with the rest of humanity.”
In addition, for what it’s worth, I would challenge your argument on its own terms. I’m sad to say that I’m far from convinced that the current pandemic will end up making us safer from worse catastrophes down the line. For example, it’s very possible that a surge in infectious disease research will lead to a rise in the number of scientists unilaterally performing dangerous experiments with pathogens and the likelihood of consequential accidental lab releases. (For more on this, I recommend Christian Enemark’s book Biosecurity Dilemmas, particularly the first few chapters.)
These are thorny issues, and I’d be more than happy to discuss all this offline if you’d like!
Hmm, FWIW I didn't think for one second that the author was suggesting inspiring a disaster, and I think it's completely fine to post a short argument that doesn't go full moral uncertainty. It's not as if the audience is unfamiliar with utilitarian reasoning, or as if a sketch of an original utilitarian argument should ever be understood as an endorsement or call to action. No?
Thanks, cwbakerlee, for the comment. Maybe this is due in part to how much more time I've been spending on LessWrong than on the EA Forum recently, but I have been surprised by the characterization of this post as one that seems dismissive of the severity of some disasters. This isn't what I was hoping for. My mindset in writing it was one of optimism. It was inspired directly by this post, plus a conversation I had with a friend about how, if self-driving cars turn out to be riddled with failures, it could lend much more credibility to AI safety work.
I didn't intend for this to be a long post, but if I wrote it again, I'd add a section on "reasons this may not be the case." I would not, however, soften the message that endurable disasters may be overwhelmingly net-positive in expectation. I disagree that non-utilitarian moral systems would generally dismiss the main point of this post: rejecting the idea that a disaster can be net-good if it prevents bigger future disasters would be pretty extreme even by common-sense standards. This post does not suggest that these disasters should be caused on purpose. To anyone who points out the inhumanity of sanctioning a constructive disaster, one can easily point out the even greater inhumanity of trading a small number of deaths for a large one. I wouldn't make the goal of a post like this to appeal to viewpoints that myopic. Even accounting for moral uncertainty, I would discount such a viewpoint almost entirely, much as I would discount the idea that pulling the lever in the trolley problem is wrong. To the extent that the inspiring-disaster thesis is right on its own terms, it's an important consideration, and if so, I don't find it tenable that the topic should be too taboo to write a post about.
About COVID-19 in particular, I am not an expert, but I would ascribe a fairly low prior to the possibility that increased risks from ineffective containment of novel pathogens in labs would outweigh reduced risks from other adaptations in prevention, epidemiology, isolation, medical supply chains, and vaccine development. I am aware of speculation that the current outbreak resulted from a laboratory accident in Wuhan, but my understanding is that this is not well substantiated. Empirically, over the past few decades, far more deaths seem to have resulted from "naturally" occurring outbreaks that were handled poorly than from pathogens escaping labs.
When I mentioned the classic trolley problem, that was not to say it's analogous. The analogous trolley problem would involve a trolley barreling down a track that splits in two and rejoins. On the trolley's current course, it will hit a number of people drawn from distribution X, who will stop it. But if the trolley is diverted to the other side of the fork, it will hit a number of people drawn from distribution Y. The question to ask would be: "What kind of difference between X and Y would cause you to not pull the lever and instead work on finding other levers to pull?" Even a Kantian ought to agree that not pulling the lever is good if the mean of Y is greater than the mean of X.
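As a companion to the fork-and-rejoin setup above, here is a hypothetical Python sketch of that decision rule. The specific distributions chosen for X and Y below are arbitrary stand-ins, assumed purely for illustration:

```python
import random

# Hypothetical sketch of the forked-track variant: the trolley hits a
# number of people drawn from X if left alone, or from Y if diverted.
# The distributions below are arbitrary stand-ins for illustration.
random.seed(0)
TRIALS = 100_000

def draw_X():
    """Casualties on the current course (truncated at zero)."""
    return max(0.0, random.gauss(5.0, 2.0))

def draw_Y():
    """Casualties on the diverted course."""
    return random.expovariate(1 / 8.0)  # mean of 8

mean_X = sum(draw_X() for _ in range(TRIALS)) / TRIALS
mean_Y = sum(draw_Y() for _ in range(TRIALS)) / TRIALS

# Decision rule from the comment: don't pull the lever when the
# diverted track is worse in expectation.
print(f"expected casualties if we stay the course (X): {mean_X:.2f}")
print(f"expected casualties if we pull the lever (Y):  {mean_Y:.2f}")
print("pull the lever?", mean_Y < mean_X)
```

With these placeholder distributions the mean of Y exceeds the mean of X, so the rule says not to pull the lever, which is exactly the case described above.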