1. I would think that we, as a species:
Should hold an Asilomar-style conference, or an ongoing series of them.
Treaties would then be enacted as a result of these conferences/talks, whereby all nation-states sign commitments not to make bioweapons and not to weaponize newer developments.
This would happen with support and oversight from UNIDIR, as well as private institutions including, but not necessarily limited to, the ‘Secure World Foundation.’
2. Also, in his talks, Ray Kurzweil highlights that the 1975 Asilomar conference was useful in bringing about effective regulation of recombinant DNA research (paraphrased). Link is below.
https://en.wikipedia.org/wiki/Asilomar_Conference_on_Recombinant_DNA
3. Intent: I feel this is, and has been, an ongoing discussion, and a sensitive one. In particular, any measures enacted could have unintended consequences and cause accidental harm: an innocent person or group could be targeted by a counter-measure, and freedoms, liberties, and genuine innovation could take a negative hit as a result of the measures taken.
Note: A previous revision of the Wikipedia ‘Global Catastrophic Risk’ page had a ‘Likelihood’ section (since removed). It cited the Future of Humanity Institute’s 2008 Technical Report as a source (link below, though I have not verified this against the actual FHI source). There, the ‘Estimated probability for human extinction before 2100’ was given as: a) 0.05% for a natural pandemic and b) 2% for an engineered pandemic.
https://en.wikipedia.org/w/index.php?title=Global_catastrophic_risk&oldid=999079110
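As a rough illustration of how the two quoted figures combine, here is a minimal sketch. It assumes, unrealistically, that the two risks are independent; the numbers are just the ones from the quoted Wikipedia table, not endorsed estimates.

```python
# Illustrative only: combine the two quoted extinction-risk estimates,
# assuming (unrealistically) that the two risks are independent.
p_natural = 0.0005   # 0.05% -- natural pandemic, per the quoted table
p_engineered = 0.02  # 2% -- engineered pandemic, per the quoted table

# Probability that at least one of the two occurs before 2100
p_either = 1 - (1 - p_natural) * (1 - p_engineered)
print(f"Combined pandemic extinction risk: {p_either:.4%}")  # ~2.05%
```

The engineered-pandemic figure dominates, which is part of why base rates for engineered pandemics specifically (see below) matter so much.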
On one end of the spectrum, I think better intelligence is needed, as well as some ability to look back at slices of time (without invoking relativity or however space-time functions).
On the other end of the spectrum: if a pandemic with a high mortality rate does emerge, I would reserve my comments. But I have mentioned elsewhere the need to war-game existential risk, and also to peace-game existential hope. https://en.wikipedia.org/wiki/Wargame
Responding to part of your comment: I think Open Philanthropy (or some other organization) is actively looking for people to help put better numbers (I think they called them base rates) on the risk from engineered pandemics. If you cannot find this “call for proposals” within 20 minutes of Googling and browsing the EA Forum, I can try to dig it up.
For convenience: https://forum.effectivealtruism.org/posts/xFsmibHafAu8APgiS/request-for-proposals-help-open-philanthropy-quantify
(Deadline has passed, but it seems likely to be an ongoing need.)
Yes, that is super relevant. Hopefully such work would also produce information that could help people working on refuges/shelters calculate the likely reductions in biological risk. This would help assess different proposed solutions against each other, and would help inform whether a refuge/shelter should be built at all (initially, it seems one would want a substantial risk reduction to justify something as ambitious as a refuge/shelter).
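The comparison described above could be sketched as a simple expected-value calculation. Everything below is a hypothetical placeholder: the shelter options, their assumed risk reductions, and their costs are made up for illustration, not estimates from any source.

```python
# Hypothetical sketch: compare proposed refuge/shelter options by how much
# biological risk they remove per dollar. All numbers are made-up
# placeholders, not real estimates.
shelter_options = [
    # (name, assumed absolute risk reduction, assumed cost in USD)
    ("Small bunker network", 0.001, 50e6),
    ("Large island refuge", 0.005, 400e6),
]

for name, reduction, cost in shelter_options:
    # Risk removed per billion dollars spent -- one crude comparison metric
    efficiency = reduction / (cost / 1e9)
    print(f"{name}: removes {reduction:.1%} of risk, "
          f"{efficiency:.4f} risk-points per $1B")
```

The point is only that a common metric (here, risk removed per dollar) lets proposals be ranked against each other and against a "build nothing" baseline, once credible base rates exist to plug in.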