Thanks! In terms of insight, I think ‘Classifying global catastrophic risks’ presents a novel way of thinking about GCRs: looking at which critical systems are disrupted, and which further systems are potentially affected, across different risks. This could be helpful both in identifying new potential GCRs and in identifying points of intervention. I think a number of follow-on pieces of research could build on it, and it could also give people from a range of domains a good way to think about how to use their expertise to engage with GCR research.
From a methodology point of view, I think the biological horizon-scan (‘20 emerging issues in biological engineering’) was very successful. While it was aimed more broadly at adapting an expertise-elicitation-and-aggregation technique to anticipating relevant advances and challenges in biological engineering (so only a subset of issues in the final paper are specifically risks), it came together well and demonstrated proof of concept (the original concept being that this sort of technique could be useful in tech-GCR-relevant foresight). It has been extremely well received within the research community, and was presented at the 2017 Biological Weapons Convention and, by invitation, at the 2018 Organisation for the Prohibition of Chemical Weapons Scientific Advisory Board meeting. A major issue is that the BWC is underfunded and under-supported in various ways, and is struggling to keep up with advances in science and technology (for more on this and other challenges facing the BWC, see our 2017 report here). Participants in the 2017 meeting commented that exercises like our horizon-scan were very useful to the BWC for that reason.
I’d like to see us do more of this, including drilling down further on emerging biothreats specifically, and applying this technique (and similar ones) to other risk and emerging-technology domains.
Re: papers in the original review, I was very pleased with how the Malicious AI project turned out. It resulted in a landmark report that has significantly influenced the conversation. And while the topics covered were mostly near-term AI, it provided an opportunity to introduce a number of principles that may prove influential as we move closer to transformative AI: the responsibilities of research leaders, security best practices, differing norms of openness for certain types of research, and ideas around monitoring and tracking of resources such as hardware.
I was also very pleased with how Natalie Jones (et al.)’s paper turned out. Natalie is a PhD student at Cambridge who has been mentored during some of her time here by CSER’s Julius Weitzdorfer. What is particularly satisfying, as the OP pointed out, is that while the paper was being finalised, CSER was able to support Natalie and a team of students in pushing through one of its key recommendations: establishing an All-Party Parliamentary Group on Future Generations (for which CSER is playing the role of secretariat, with CSER researchers and senior advisors playing an advisory role).
So I would say a continued stream of papers that do some combination of the following would be a good scenario: (a) opening up new ways of analysing GCRs; (b) developing new methodologies for foresight and for anticipating risks or risk-relevant advances; (c) producing outputs that are useful to institutions with key roles in managing global risks; (d) making implementable recommendations that we can help to put into practice; (e) introducing concepts in contemporary technology, policy, and risk that will be useful for future challenges.
These can be well complemented by high-quality academic papers chipping away at GCR-relevant issues (biodiversity loss, international risk governance, AI foresight and governance, global ethics and future generations, etc.) in a more incremental, fine-grained fashion. Add to that targets of opportunity: emerging areas of risk that aren’t quite in anyone’s domain to work on properly at present, and are thus going under-treated (e.g. a workshop led by Shahar Avin on emerging risks from modernising the infrastructure around nuclear command and control, especially the cyber angle, might fall into this category; the resulting paper has just been submitted). And several papers coming out in 2019 will focus on analysing the evidence base for different claims around Xrisks/GCRs, as well as on ways of collecting and aggregating Xrisk/GCR-relevant research across fields, which we think will help Xrisk/GCR’s move towards being a ‘mature’ field.
Thanks, Seán! This response was incredibly helpful. Looking forward to reading some of these.