I actually had a sense that these broad overviews were significantly less valuable to me than some of the other GCRI papers I’ve read, and I predict that other people who have thought about global catastrophic risks for a while would feel the same.
That is interesting to hear. Some aspects of the overviews are of course going to be more familiar to domain experts. The integrated assessment paper in particular describes an agenda and is not intended to have much in the way of original conclusions.
The argument seemed to mostly consist of a few concrete examples, most of which seemed relatively tenuous to me. (Happy to go into more depth on that.)
I would be quite interested in further thoughts you have on this. I’ve actually found that the central ideas of the far future argument paper have held up quite well, possibly even better than I had originally expected. Ditto for the primary follow-up to this paper, “Reconciliation between factions focused on near-term and long-term artificial intelligence”, which is a deeper dive on this theme in the context of AI. Some examples of work that is in this spirit:
· Open Philanthropy Project’s grant for the new Georgetown CSET group, which pursues “opportunities to inform current and future policies that could affect long-term outcomes” (link)
· The study The Malicious Use of Artificial Intelligence, which, despite being led by FHI and CSER, is focused on near-term and sub-existential risks from AI
· The paper Bridging near- and long-term concerns about AI by Stephen Cave and Seán S. ÓhÉigeartaigh of CSER/CFI
All of these are more recent than the GCRI papers, though I don’t actually know how influential GCRI’s work was in any of the above. The Cave and ÓhÉigeartaigh paper is the only one that cites our work, and I know that some other people have independently reached the same conclusion about synergies between near-term and long-term AI. Even if GCRI’s work was not causative in these cases, these data points show that the underlying ideas have wider currency, and that GCRI may have been (probably was?) ahead of the curve.
One kind of bad operationalization might be “research that would give the best people at FHI, MIRI and Open Phil a concrete sense of being able to make better decisions in the GCR space”.
That’s fine, but note that those organizations have much larger budgets than GCRI. Of them, GCRI has the closest ties to FHI; indeed, two FHI researchers were co-authors on the long-term trajectories paper. Also, if GCRI were to be funded specifically for research to improve the decision-making of people at those organizations, then we would invest more in interacting with them, learning what they don’t know or are getting wrong, and focusing our work accordingly. I would be open to considering such funding, but that is not what we have been funded for, so our existing body of work may be oriented in an at least somewhat different direction.
It may also be worth noting that the long-term trajectories paper functioned as more of a consensus paper, and so I had to be more restrained with respect to bolder and more controversial claims. To me, the paper’s primary contributions are in showing broad consensus for the topic, integrating the many co-authors’ perspectives into one narrative, breaking ground especially in the empirical analysis of long-term trajectories, and providing entry points for a wider range of researchers to contribute to the topic. Most of the existing literature is primarily theoretical/philosophical, but the empirical details are very important. (The paper also played a professional development role for me in that it gave me experience leading a massively multi-authored paper.)
Given the consensus format of the paper, I was intrigued that the co-author group was able to support the (admittedly toned-down) punch line in the conclusion: “contrary to some claims in the catastrophic risk literature, extinction risks may not be categorically more important than large subextinction risks”. A bolder, more controversial idea that I have a lot of affinity for is that the common emphasis on extinction risk is wrong, and that a wider (potentially much wider) set of risks merits comparable concern. Related to this is the idea that “existential risk” is either bad terminology or not the right thing to prioritize. I have not yet had the chance to develop these ideas exactly as I see them (largely due to lack of funding for it), but the long-term trajectories paper does cover a lot of the relevant ground.
(I have also not had the chance to do much to engage the wider range of researchers who could contribute to the topic, again due to lack of funding for it. These would mainly be researchers with expertise on important empirical details. That sort of follow-up is a thing that funding often goes toward, but we didn’t even have dedicated funding for the original paper, so we’ve instead focused on other work.)
Overall, the response to the long-term trajectories paper has been quite positive. Some public examples:
· The 2018 AI Alignment Literature Review and Charity Comparison, which wrote: “The scope is very broad but the analysis is still quite detailed; it reminds me of Superintelligence a bit. I think this paper has a strong claim to becoming the default reference for the topic.”
· A BBC article on the long-term future, which calls the paper “intriguing and readable” and then describes it in detail. The BBC also invited me to contribute an article on the topic for them, which turned into this.
That is interesting to hear. Some aspects of the overviews are of course going to be more familiar to domain experts.
Just wanted to make a quick note that I also felt the “overview” style posts aren’t very useful to me (since they mostly encapsulate things I had already thought about).
At some point I was researching some aspects of nuclear war and reading up on a relevant GCRI paper, and what I found myself really wishing for was that the paper had drilled deep into whatever object-level, empirical data was available, rather than being a high-level summary.
Thanks, that makes sense. This is one aspect in which audience is an important factor. Our two recent nuclear war model papers (on the probability and impacts) were written to be accessible to wider audiences, including audiences less familiar with risk analysis. This is of course a factor for all research groups that work on topics of interest to multiple audiences, not just GCRI.