Thanks Carla and Luke for a great paper. This is exactly the sort of antagonism that those not so deeply immersed in the xrisk literature can benefit from, because it surveys so much and highlights the dangers of a single core framework. Alternatives to the often esoteric and quasi-religious far-future speculations that seem to drive a lot of xrisk work are not always obvious to decision makers, and that gap means the field can be dismissed as ‘far fetched’. Democratisation is a critical component (along with apoliticisation).
I must say it was a bit of a surprise to me that the TUA is seen as the paradigm approach to ERS. I’ve worked in this space for about 5-6 years and never really felt drawn to strong longtermism, transhumanism, or technological progress. ERS seems to me like the limiting case of ordinary risk studies. I’ve worked in healthcare quality and safety (risk to one person at a time) and public health (risk to members of populations), and extinction risk seems like the important and interesting limit of this progression. I concur with the calls for grounding in the literature of risk analysis, democracy, and pluralism. In fact, in peer-reviewed work I’ve previously called for citizen juries, public deliberation, and experimental philosophy in this space (here), and for apolitical, aggregative processes (here), as well as for better publicly facing national risk (and xrisk) communication and prioritisation tools (under review with Risk Analysis).
Some key points I appreciated or reflected on in your paper were:
The fact that empirical and normative assumptions are often masked by tools and frameworks.
The distinction between extinction risk and existential risk.
The questioning of total utilitarianism (I often prefer a maximin approach, also with consideration of important [not necessarily maximising] value obtained from honouring treaties, equity, etc.)
I’ve never found that the ‘astronomical waste’ claims hold up particularly well under certain resolutions of Fermi’s paradox (basically, I doubt the moral and empirical claims of the TUA and strong longtermism, and yet I am fully committed to ERS).
The point about equivocating between near-term nuclear war and billion-year stagnation.
Clarity around Ord’s 1 in 6 (extinction vs existential) - I’m guilty of conflating the two.
I note that failing to mitigate ‘mere’ GCRs could also derail certain xrisk mitigation efforts.
Again, great work. This is a useful and important broad survey and stimulus; not every paper needs to take a single point and dive to its bottom. Well done.