Thanks, cwbakerlee, for the comment. Maybe this is due in part to how much more time I’ve been spending on LessWrong than the EA Forum recently, but I have been surprised by the characterization of this post as one that seems dismissive of the severity of some disasters. That isn’t what I was hoping for. My mindset in writing it was one of optimism. It was inspired directly by this post, plus a conversation I had with a friend about how, if self-driving cars turn out to be riddled with failures, it could lend much more credibility to AI safety work.
I didn’t intend for this to be a long post, but if I wrote it again, I’d add a section on “reasons this may not be the case.” I would not, however, soften the message that endurable disasters may be overwhelmingly net positive in expectation. I disagree that non-utilitarian moral systems would generally dismiss the main point of this post; rejecting the idea that disasters can be net good when they prevent bigger future disasters would be pretty extreme even by commonsense standards. This post does not suggest that these disasters should be caused on purpose. To anyone who points out the inhumanity of sanctioning a constructive disaster, it can easily be pointed out that there is even greater inhumanity in trading a small number of deaths for a large one. I wouldn’t agree with making the goal of a post like this to appeal to viewpoints this myopic. Even accounting for moral uncertainty, I would discount this viewpoint almost entirely, similarly to how I would discount the idea that pulling the lever in the trolley problem is wrong. To the extent that the inspiring disaster thesis is right on its own terms, it’s an important consideration. And if so, I don’t find it very tenable that writing a post on it should be too taboo.
About COVID-19 in particular: I am not an expert, but I would ascribe a fairly low prior to the possibility that increased risks from ineffective containment of novel pathogens in labs would outweigh reduced risks from other adaptations in prevention, epidemiology, isolation, medical supply chains, and vaccine development. I am aware of speculation that the current outbreak originated in a laboratory in Wuhan, but my understanding is that this is not well substantiated. Empirically, over the past few decades, far more deaths seem to have resulted from “naturally” occurring disease outbreaks being handled poorly than from pathogens escaping labs.
When I mentioned the classic trolley problem, that was not to say it’s analogous. The analogous trolley problem would involve a trolley barreling down a track that splits in two and rejoins. On its current course, the trolley will hit a number of people drawn from distribution X, who will stop it. But if the trolley is diverted to the other side of the fork, it will hit a number of people drawn from distribution Y instead. The question to ask is: “What kind of difference between X and Y would cause you to not pull the lever and instead work on finding other levers to pull?” Even a Kantian ought to agree that not pulling the lever is good if the mean of Y is greater than the mean of X.
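To make that comparison concrete, here is a minimal Monte Carlo sketch in Python. The distributions, trial count, and function names below are placeholders I made up for illustration; the only point is that the decision rule above reduces to comparing expected harm under X and Y.

```python
import random

def expected_hits(sample, trials=100_000):
    """Estimate the mean of a distribution from a sampling function."""
    return sum(sample() for _ in range(trials)) / trials

# Hypothetical placeholder distributions, not from the original post:
# X = people hit on the trolley's current course,
# Y = people hit if the lever is pulled and the trolley is diverted.
sample_x = lambda: random.randint(1, 5)
sample_y = lambda: random.randint(2, 8)

mean_x = expected_hits(sample_x)
mean_y = expected_hits(sample_y)

# If E[Y] > E[X], not pulling the lever minimizes expected deaths,
# which is the conclusion the paragraph above draws.
print("pull the lever" if mean_y < mean_x else "do not pull the lever")
```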