The article suggests, “This service is for anyone who is seriously interested in working on mitigating catastrophic biological risks, like the risk of an engineered pandemic.”
It’s great that there are skilled people addressing this threat, and it seems likely they can make a constructive contribution which reduces the risk of an engineered pandemic threatening civilization itself. But the question I hope we are asking is: is reducing the risk of an engineered pandemic sufficient?
The key issue with genetic engineering, or with any technology, seems to be the scale of the power involved. A simple example can illustrate this...
In WWII we threw conventional explosives at each other with wild abandon all over the planet. But because conventional explosives are of limited scale, and don’t have the power to collapse the system as a whole, we could make this mistake, clean up the mess, try to learn the lessons, and continue with further progress. This is the paradigm which defines the past.
If we fight WWIII with nuclear weapons, then cleaning up the mess, learning the lessons, and continuing with progress will take place, if they happen at all, over much longer time frames. Nobody alive at the time of such a war will live to see whatever recovery might eventually occur. This is the paradigm which defines the future.
SUCCESS: Imperfect management worked with conventional explosives because the scale of these weapons is limited, incapable of crashing the systems required for recovery.
FAILURE: Imperfect management will not work with nuclear weapons, because the scale of these powers is vastly greater, and can credibly be argued to be capable of destroying the very systems required for recovery.
If the power of genetic engineering is of existential scale, as is the case with nuclear weapons, then it would seem to follow that merely reducing the risk of a global genetic catastrophe is not sufficient. Risk mitigation alone becomes a game of Russian roulette, where one gets away with pulling the trigger again and again, until the one bad day when one doesn’t.
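To make the roulette analogy a bit more concrete, here is a back-of-the-envelope sketch, where p is an assumed constant per-trial probability of catastrophe and n is the number of trials (both symbols are illustrative, not from the article):

\[
P(\text{catastrophe within } n \text{ trials}) = 1 - (1 - p)^n \;\to\; 1 \quad \text{as } n \to \infty, \text{ for any fixed } p > 0.
\]

Even a 1% chance per year compounds to roughly a 63% chance over a century; reducing p stretches the timeline, but so long as p stays above zero it never removes the eventual near-certainty.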
A simple rule governs much of human history: if it’s possible for something to go wrong, sooner or later it likely will.