This is the (very slightly edited) feedback that I sent to GCRI based on their application (caveat that GCR policy is not my area of expertise and I only had relatively weak opinions in the discussion around this grant, so this should definitely not be seen as representative of the broader opinion of the fund):
I was actually quite positive on this grant, so the primary commentary I can provide is a summary of what would have been sufficient to move me to be very excited about the grant.
Overall, I have to say that I was quite positively surprised after reading a bunch of GCRI’s papers, which I had not done before (in particular the paper that lists and analyzes all the nuclear weapon close-calls).
I think the biggest thing that made me hesitant about strongly recommending GCRI is that I don’t have a great model of who GCRI is trying to reach. I am broadly not super excited about reaching out to policy makers at this stage of the GCR community’s strategic understanding, and am confused enough about policy capacity-building that I feel uncomfortable making strong recommendations based on my models there. I do have some models of capacity-building that suggest some concrete actions, but those have more to do with building functional research institutions that are focused on recruiting top-level talent to think more about problems related to the long-term future.
I noticed that while I ended up being quite positively surprised by the GCRI papers, I hadn’t read any of them up to that point, and neither had any of the other fund members. This made me think that we are likely not the target audience of those papers. And while I did find them useful, I did not have a sense that they were trying to make conceptual progress on what I consider to be the current fundamental confusions around global catastrophic risk, which I think are more centered around a set of broad strategic questions and a set of technical problems.
I think the key thing I would need in order to be very excited about GCRI is to understand, and be excited by, the target group that GCRI is trying to communicate with. My current model suggests that GCRI is primarily trying to reach existing policy makers, which seems unlikely to contribute much to furthering conceptual progress on global catastrophic risks.
Seth wrote a great response that I think he is open to posting to the forum.
Oliver Habryka’s comments raise some important issues, concerns, and ideas for future directions. I elaborate on these below. First, I would like to express my appreciation for his writing these comments and making them available for public discussion. Doing this on top of the reviews themselves strikes me as quite a lot of work, but also very valuable for advancing grant-making and activity on the long-term future.
My understanding of Oliver’s comments is that while he found GCRI’s research to be of a high intellectual quality, he did not have confidence that the research is having sufficient positive impact. There seem to be four issues at play: GCRI’s audience, the value of policy outreach on global catastrophic risk (GCR), the review of proposals on unfamiliar topics, and the extent to which GCRI’s research addresses fundamental issues in GCR.
(1) GCRI’s audience
I would certainly agree that it is important for research to have a positive impact on the issues at hand and not just be an intellectual exercise. To have an impact, it needs an audience.
Oliver’s stated impression is that GCRI’s audience is primarily policy makers, and not the EA long-term future (EA-LTF) community or global catastrophic risk (GCR) experts. I would agree that GCRI’s audience includes policy makers, but I would disagree that our audience does not include the EA-LTF community or GCR experts. I would add that our audience also includes scholars who work on topics adjacent to GCR and can make important contributions to GCR, as well as people in other relevant sectors, e.g. private companies working on AI. We try to prioritize our outreach to these audiences based on what will have the most positive impact on reducing GCR given our (unfortunately rather limited) resources and our need to also make progress on the research we are funded for. We very much welcome suggestions on how we can do this better.
The GCRI paper that Oliver described (“the paper that lists and analyzes all the nuclear weapon close-calls”) is A Model for the Probability of Nuclear War. This paper is indeed framed for policy audiences, which was in part due to the specifications of the sponsor of this work (the Global Challenges Foundation) and in part because the policy audience is the most important audience for work on nuclear weapons. It is easy to see how reading that paper could suggest that policy makers are GCRI’s primary audience. Nonetheless, we did manage to embed some EA themes into the paper, such as the question of how much nuclear war should be prioritized relative to other issues. This is an example of us trying to stretch our limited resources in directions of relevance to wider audiences including EA.
Some other examples: Long-term trajectories of human civilization was largely written for audiences of EA-LTF, GCR experts, and scholars of adjacent topics. Global Catastrophes: The Most Extreme Risks was largely written for the professional risk analysis community. Reconciliation between factions focused on near-term and long-term artificial intelligence was largely written for… well, the title speaks for itself, and is a good example of GCRI engaging across multiple audiences.
The question of GCRI’s audience is a detail for which an iterative review process could have helped. Had GCRI known that our audience would be an important factor in the review, we could have spoken to this more clearly in our proposal. An iterative process would increase the workload, but perhaps in some cases it would be worth it.
(2) The value of policy outreach
Oliver writes, “I am broadly not super excited about reaching out to policy makers at this stage of the GCR community’s strategic understanding, and am confused enough about policy capacity-building that I feel uncomfortable making strong recommendations based on my models there.”
This is consistent with comments I’ve heard expressed by other people in the EA-LTF-GCR community, and some colleagues report hearing things like this too. The general trend has been that people within this community who are not active in policy outreach are much less comfortable with it than those who are. This makes sense, but it also is a problem that holds us back from having a larger positive impact on policy. This includes GCRI’s funding and the work that the funding supports, but it is definitely bigger than GCRI.
This is not the space for a lengthy discussion of policy outreach. For now, it suffices to note that there is considerable policy expertise within the EA-LTF-GCR community, including at GCRI and several other organizations. There are some legitimately tricky policy outreach issues, such as in drawing attention to certain aspects of risky technologies. Those of us who are active in policy outreach are very attentive to these issues. A lot of the outreach is more straightforward, and a nontrivial portion is actually rather mundane. Improving awareness about policy outreach within the EA-LTF-GCR community should be an ongoing project.
It is also worth distinguishing between policy outreach and policy research. Much of GCRI’s policy-oriented work is the latter. The research can and often does inform the outreach. Where there is uncertainty about what policy outreach to do, policy research is an appropriate investment. While I’m not quite sure what is meant by “this stage of the GCR community’s strategic understanding”, there’s a good chance that this understanding could be improved by research by groups like GCRI, if we were funded to do so.
(3) Reviewing proposals on unfamiliar topics
We should in general expect better results when proposals are reviewed by people who are knowledgeable of the domains covered in the proposals. Insofar as Oliver is not knowledgeable about policy outreach or other aspects of GCRI’s work, then arguably someone else should have reviewed GCRI’s proposal, or at least these aspects of GCRI’s proposal.
This makes me wonder if the Long-Term Future Fund may benefit from a more decentralized review process, possibly including some form of peer review. It seems like an enormous burden for the fund’s team to have to know all the nuances of all the projects and issue areas that they could be funding. I certainly would not want to do all that on my own. It is common for funding proposal evaluation to include peer review, especially in the sciences. Perhaps that could be a way for the fund’s team to lighten its load while bringing in a wider mix of perspectives and expertise. I know I would volunteer to review some proposals, and I’m confident at least some of my colleagues would too.
It may be worth noting that the sciences struggle to review interdisciplinary funding proposals. Studies report a perceived bias against interdisciplinary proposals: “peers tend to favor research belonging to their own field” (link), so work that cuts across fields is funded less. Some evidence supports this perception (link). GCRI’s work is highly interdisciplinary, and it is plausible that this creates a bias against us among funders. Ditto for other interdisciplinary projects. This is a problem because a lot of the most important work is cross-cutting and interdisciplinary.
(4) GCRI’s research on fundamental issues in GCR
As noted above, GCRI does work for a variety of audiences. Some of our work is not oriented toward fundamental issues in GCR. But here is some that is:
* Long-term trajectories of human civilization is on (among other things) the relative importance of extinction vs. sub-extinction risks.
* The far future argument for confronting catastrophic threats to humanity: Practical significance and alternatives is on strategy for how to reduce GCR in a world that is mostly not dedicated to reducing GCR.
* Towards an integrated assessment of global catastrophic risk outlines an agenda for identifying and evaluating the best ways of reducing the entirety of global catastrophic risk.
See also our pages on Cross-Risk Evaluation & Prioritization, Solutions & Strategy, and perhaps also Risk & Decision Analysis.
Oliver writes “I did not have a sense that they were trying to make conceptual progress on what I consider to be the current fundamental confusions around global catastrophic risk, which I think are more centered around a set of broad strategic questions and a set of technical problems.” He can speak for himself on what he sees the fundamental confusions as being, but I find it hard to conclude that GCRI’s work is not substantially oriented toward fundamental issues in GCR.
I will note that GCRI has always wanted to focus primarily on the big cross-cutting GCR issues, but we have never gotten significant funding for it. Instead, our funding has gone almost exclusively to more narrow work on specific risks. That is important work too, and we are grateful for the funding, but I think a case can be made for more support for cross-cutting work on the big issues. We still find ways to do some work on the big issues, but our funding reality prevents us from doing much.
The question of GCRI’s audience is a detail for which an iterative review process could have helped. Had GCRI known that our audience would be an important factor in the review, we could have spoken to this more clearly in our proposal. An iterative process would increase the workload, but perhaps in some cases it would be worth it.
I want to make sure that there isn’t any confusion about this: When I do a grant writeup like the one above, I am definitely only intending to summarize where I am personally coming from. The LTF-Fund had 5 voting members last round (and will have 4 in the coming rounds), and so my assessment is necessarily only a fraction of the total assessment of the fund.
I don’t currently know whether the question of the target audience would also have been a crux for the other fund members, and given that I already gave a positive recommendation, their cruxes and uncertainties would actually have been more important to address than my own.
On the question of whether we should have an iterative process: I do view this publishing of the LTF-responses as part of an iterative process. Given that we are planning to review applications every few months, you responding to what I wrote allows us to update on your responses for next round, which will be relatively soon.
That makes sense. I might suggest making this clear to other applicants. It was not obvious to me.
Thanks, this is good to know.
(Breaking things up into multiple replies, to make things easier to follow, vote on, and reply to)
As noted above, GCRI does work for a variety of audiences. Some of our work is not oriented toward fundamental issues in GCR. But here is some that is:
Oliver writes “I did not have a sense that they were trying to make conceptual progress on what I consider to be the current fundamental confusions around global catastrophic risk, which I think are more centered around a set of broad strategic questions and a set of technical problems.” He can speak for himself on what he sees the fundamental confusions as being, but I find it hard to conclude that GCRI’s work is not substantially oriented toward fundamental issues in GCR.
Of those, I had read “Long-term trajectories of human civilization” and “The far future argument for confronting catastrophic threats to humanity: Practical significance and alternatives” before I made my recommendation (which I want to clarify was a broadly positive recommendation, just not a very-positive recommendation).
I actually had a sense that these broad overviews were significantly less valuable to me than some of the other GCRI papers that I’ve read and I predict that other people who have thought about global catastrophic risks for a while would feel the same. I had a sense that they were mostly retreading and summarizing old ground, while being more difficult to read and of lower quality than most of the writing that already exists on this topic (a lot of it published by FHI, and a lot of it written on LessWrong and the EA Forum).
I also generally found the arguments in them not particularly compelling (in particular, I found the arguments in “The far future argument for confronting catastrophic threats to humanity: Practical significance and alternatives” relatively weak, and thought that it failed to really make a case for significant convergent benefits of long-term and short-term concerns. The argument seemed to mostly consist of a few concrete examples, most of which seemed relatively tenuous to me. Happy to go into more depth on that).
I highlighted “A model for the probability of nuclear war” not because it was the only paper I read (I read about 6 GCRI papers when doing the review and two more since then), but because it was the paper that did actually feel to me like it was helping me build a better model of the world, and something that I expect to be a valuable reference for quite a while. I actually don’t think that applies to any of the three papers you linked above.
I don’t currently have a great operationalization of what I mean by “fundamental confusions around global catastrophic risks”, so I am sorry for not being able to be more clear on this. One kind of bad operationalization might be “research that would give the best people at FHI, MIRI and Open Phil a concrete sense of being able to make better decisions in the GCR space”. It seems plausible to me that you are currently aiming to write some papers with a goal like this in mind, but I don’t think most of GCRI’s papers achieve that. “A model for the probability of nuclear war” did feel like a paper that might actually achieve that, though from what you said it might not actually have had that goal.
I actually had a sense that these broad overviews were significantly less valuable to me than some of the other GCRI papers that I’ve read and I predict that other people who have thought about global catastrophic risks for a while would feel the same.
That is interesting to hear. Some aspects of the overviews are of course going to be more familiar to domain experts. The integrated assessment paper in particular describes an agenda and is not intended to have much in the way of original conclusions.
The argument seemed to mostly consist of a few concrete examples, most of which seemed relatively tenuous to me. Happy to go into more depth on that.
I would be quite interested in further thoughts you have on this. I’ve actually found that the central ideas of the far future argument paper have held up quite well, possibly even better than I had originally expected. Ditto for the primary follow-up to this paper, “Reconciliation between factions focused on near-term and long-term artificial intelligence”, which is a deeper dive on this theme in the context of AI. Some examples of work that is in this spirit:
· Open Philanthropy Project’s grant for the new Georgetown CSET group, which pursues “opportunities to inform current and future policies that could affect long-term outcomes” (link)
· The study The Malicious Use of Artificial Intelligence, which, despite being led by FHI and CSER, is focused on near-term and sub-existential risks from AI
· The paper Bridging near- and long-term concerns about AI by Stephen Cave and Seán S. ÓhÉigeartaigh of CSER/CFI
All of these are more recent than the GCRI papers, though I don’t actually know how influential GCRI’s work was in any of the above. The Cave and ÓhÉigeartaigh paper is the only one that cites our work, and I know that some other people have independently reached the same conclusion about synergies between near-term and long-term AI. Even if GCRI’s work was not causative in these cases, these data points show that the underlying ideas have wider currency, and that GCRI may have been (probably was?) ahead of the curve.
One kind of bad operationalization might be “research that would give the best people at FHI, MIRI and Open Phil a concrete sense of being able to make better decisions in the GCR space”.
That’s fine, but note that those organizations have much larger budgets than GCRI. Of them, GCRI has the closest ties to FHI. Indeed, two FHI researchers were co-authors on the long-term trajectories paper. Also, if GCRI were to be funded specifically for research to improve the decision-making of people at those organizations, then we would invest more in interacting with them, learning what they don’t know / are getting wrong, and focusing our work accordingly. I would be open to considering such funding, but that is not what we have been funded for, so our existing body of work may be oriented in an at least somewhat different direction.
It may also be worth noting that the long-term trajectories paper functioned as more of a consensus paper, and so I had to be more restrained with respect to bolder and more controversial claims. To me, the paper’s primary contributions are in showing broad consensus for the topic, integrating the many co-authors’ perspectives into one narrative, breaking ground especially in the empirical analysis of long-term trajectories, and providing entry points for a wider range of researchers to contribute to the topic. Most of the existing literature is primarily theoretical/philosophical, but the empirical details are very important. (The paper also played a professional development role for me in that it gave me experience leading a massively multi-authored paper.)
Given the consensus format of the paper, I was intrigued that the co-author group was able to support the (admittedly toned down) punch-line in the conclusion “contrary to some claims in the catastrophic risk literature, extinction risks may not be categorically more important than large subextinction risks”. A bolder/more controversial idea that I have a lot of affinity for is that the common emphasis on extinction risk is wrong, and that a wider—potentially much wider—set of risks merits comparable concern. Related to this is the idea that “existential risk” is either bad terminology or not the right thing to prioritize. I have not yet had the chance to develop these ideas exactly as I see them (largely due to lack of funding for it), but the long-term trajectories paper does cover a lot of the relevant ground.
(I have also not had the chance to do much to engage the wider range of researchers who could contribute to the topic, again due to lack of funding for it. These would mainly be researchers with expertise on important empirical details. That sort of follow-up is a thing that funding often goes toward, but we didn’t even have dedicated funding for the original paper, so we’ve instead focused on other work.)
Overall, the response to the long-term trajectories paper has been quite positive. Some public examples:
· The 2018 AI Alignment Literature Review and Charity Comparison, which wrote: “The scope is very broad but the analysis is still quite detailed; it reminds me of Superintelligence a bit. I think this paper has a strong claim to becoming the default reference for the topic.”
· A BBC article on the long-term future, which calls the paper “intriguing and readable” and then describes it in detail. The BBC also invited me to contribute an article on the topic for them, which turned into this.
That is interesting to hear. Some aspects of the overviews are of course going to be more familiar to domain experts.
Just wanted to make a quick note that I also felt the “overview”-style posts weren’t very useful to me (since they mostly encapsulate things I had already thought about).
At some point I was researching some aspects of nuclear war, and reading up on a GCRI paper that was relevant, and what I found myself really wishing was that the paper had just drilled deep into whatever object-level, empirical data was available, rather than being a high-level summary.
Thanks, that makes sense. This is one aspect in which audience is an important factor. Our two recent nuclear war model papers (on the probability and impacts) were written to be accessible to wider audiences, including audiences less familiar with risk analysis. This is of course a factor for all research groups that work on topics of interest to multiple audiences, not just GCRI.
Thanks for posting the response! Some short clarifications:
We should in general expect better results when proposals are reviewed by people who are knowledgeable of the domains covered in the proposals. Insofar as Oliver is not knowledgeable about policy outreach or other aspects of GCRI’s work, then arguably someone else should have reviewed GCRI’s proposal, or at least these aspects of GCRI’s proposal.
My perspective only played a partial role in the discussion of the GCRI grant, since I am indeed not the person with the most policy expertise on the fund. It just so happens that I am also the person who had the most resources available for writing things up for public consumption, so I wouldn’t update too much on my specific feedback. That said, my perspective might still be useful for understanding the experience of people closer to my level of expertise, of whom there are many, and I do obviously think there is important truth to it (and it also helps me build better models of the policy space, which I do think is valuable).
It may be worth noting that the sciences struggle to review interdisciplinary funding proposals. Studies report a perceived bias against interdisciplinary proposals: “peers tend to favor research belonging to their own field” (link), so work that cuts across fields is funded less. Some evidence supports this perception (link). GCRI’s work is highly interdisciplinary, and it is plausible that this creates a bias against us among funders. Ditto for other interdisciplinary projects. This is a problem because a lot of the most important work is cross-cutting and interdisciplinary.
I strongly agree with this, and also think that a lot of the best work is cross-cutting and interdisciplinary. I think the degree to which things are interdisciplinary is part of the reason why there is some shortage of EA grantmaking expertise. Part of my hope with facilitating public discussion like this is to help me and other people in grantmaking positions build better models of domains where we have less expertise.
All good to know, thanks.
I’ll briefly note that I am currently working on a more extended discussion of policy outreach suitable for posting online, possibly on this site, that is oriented toward improving the understanding of people in the EA-LTF-GCR community. It’s not certain I’ll have the chance to complete it given my other responsibilities, but hopefully I will.
Also if it would help I can provide suggestions of people at other organizations who can give perspectives on various aspects of GCRI’s work. We could follow up privately about that.