I realise the calculation in the article excerpt you showed is not meant as an accurate estimate. Marc and Thomas also say:
The record of laboratory incidents and accidental infections in biosafety level 3 (BSL3) laboratories provides a starting point for quantifying risk. Concentrating on the generation of transmissible variants of avian influenza, we provide an illustrative calculation of the sort that would be performed in greater detail in a fuller risk analysis. Previous publications have suggested similar approaches to this problem
...
These numbers should be discussed, challenged, and modified to fit the particularities of specific types of PPP experiments.
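An illustrative calculation of the sort the excerpt describes typically multiplies a per-lab-year accident probability by the number of labs and years, and by the chance that an accident seeds a pandemic. As a minimal sketch of that structure, with every number a purely hypothetical placeholder (these are not the figures from the article):

```python
# Illustrative lab-accident risk calculation, in the style the excerpt describes.
# ALL numbers below are hypothetical placeholders, not the article's figures.
p_infection_per_lab_year = 0.002   # assumed chance of a lab-acquired infection per BSL3 lab-year
n_labs = 10                        # assumed number of labs doing the work
n_years = 10                       # assumed duration of the research program
p_pandemic_given_infection = 0.1   # assumed chance an infection sparks a pandemic

lab_years = n_labs * n_years
expected_infections = p_infection_per_lab_year * lab_years

# Probability that at least one lab-year produces a pandemic-sparking accident,
# treating lab-years as independent trials.
p_per_lab_year = p_infection_per_lab_year * p_pandemic_given_infection
p_at_least_one_pandemic = 1 - (1 - p_per_lab_year) ** lab_years

print(f"expected lab-acquired infections: {expected_infections:.2f}")
print(f"P(at least one pandemic):         {p_at_least_one_pandemic:.4f}")
```

The point of such a sketch is not the output but the sensitivity: each input spans orders of magnitude of uncertainty, so the product does too, which is exactly why the authors call it illustrative.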
So it looks like the calculation above was just an illustrative example, and EA did not have sufficient data to come to conclusions. Is there any other part of the article that leads you to believe the authors had strong faith in their numbers?
Given that they got this so wrong, why should we believe that their other analyses of Global Catastrophic Risk aren’t also extremely flawed?
What did EA get wrong, exactly? I would guess they made rational decisions in a situation of extreme uncertainty.
Statistical estimation with little historical data is likely to be inaccurate; no virus leak had turned into a pandemic before.
Furthermore, even accurate estimates will sometimes lead to bad outcomes. If you throw 100 dice enough times, you will eventually get all 1s.
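The dice point can be checked numerically: even when a small per-trial probability is estimated exactly right, repeating the trial enough times makes the bad outcome near-certain. A quick sketch (the probability and trial counts here are arbitrary, chosen only for illustration):

```python
import random

p = 0.01  # a correctly estimated, small per-trial probability of disaster

for n_trials in (10, 100, 1000):
    # Exact probability of at least one disaster in n_trials independent trials
    p_any = 1 - (1 - p) ** n_trials
    print(f"{n_trials:>5} trials: P(at least one disaster) = {p_any:.3f}")

# Monte Carlo sanity check for the 1000-trial case
random.seed(0)
runs = 2_000
hits = sum(any(random.random() < p for _ in range(1000)) for _ in range(runs))
print(f"simulated frequency over {runs} runs: {hits / runs:.3f}")
```

So a correct estimate of a 1% risk still implies near-certain disaster over a thousand repetitions; a bad outcome by itself does not prove the estimate was wrong.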
So it looks like the calculation above was just an illustrative example, and EA did not have sufficient data to come to conclusions. Is there any other part of the article that leads you to believe the authors had strong faith in their numbers?
Generally, if you don’t have strong faith in the numbers, the way to deal with it is to study the question more. I was under the impression that understanding global catastrophic risk is the point of having organizations like FLI.
Even if they didn’t accept the numbers, the task for an organization like FLI would be to make their own estimate.
To go a bit into history: the reason the moratorium existed in the first place was that, within the span of a few weeks in 2014, 75 US scientists at the CDC were exposed to anthrax and FDA employees found 16 forgotten vials of smallpox in storage. Those incidents were what weakened the opposition enough for the moratorium to pass.
When the evidence for harm is so strong that it forces the hand of politicians, it seems to me a reasonable expectation that organizations whose mission is to think about global catastrophic risk analyse the harm and have a public position on what they think the risk is. If that’s not what organizations like the FLI are for, what are they for?
If that’s not what organizations like the FLI are for, what are they for?
They do their best to gather data, predict events on the basis of that data, and give recommendations. However, data is not perfect, models are not a perfect representation of reality, and recommendations are not necessarily unanimous. To err is human, and mistakes are possible, especially when the foundations of the applied processes contain errors.
Sometimes people just do not have enough information, and certainly nobody can gather information if the data does not exist. Still, a decision needs to be made, at least between action and inaction, and a data-supported expert guess is better than a random one.
Given a choice, would you prefer that nobody carried out the analysis, with no possibility of improvement? Or would you still let the experts do their job, with a reasonable expectation that most of the time the problems are solved and the human condition improves?
What if their decision had only a 10% chance of being better than a decision taken without carrying out any analysis? Would you seek expert advice to improve the odds of success, if that were your only option?