Thank you for this post. I’m usually wary of attempts to establish terminology unless there are clear demonstrations of its usefulness. However, in this case my impression is that public writing on related terms such as ‘global priorities research’ or ‘macrostrategy’ is sufficiently vague or ill-focused that I think this post might contribute to a valuable conversation. I’m not sure if the specific terms you’re using here will catch on, but I’m happy to see the framework clearly spelled out.
A few reactions:
[Epistemic status: I’ve only thought about your post specifically for 1 minute, but about the broader issue of the marginal utility of different types of longtermist-relevant research for something between 1 and 1000 hours depending on how you count. Still, I don’t think I have very crisp arguments or data to back up the following impressions. I think in the following I’m mostly simply stating my view rather than providing reasons to believe it.]
I strongly agree with the following (I also think it would be better if I shared more of my implicit models, though way less useful than for some other researchers):
“We believe most researchers have some implicit models which, when written up, would not meet the standards for academic publication. However, sharing them will allow these models to be built upon and improved by the community. This will also make it easier for outsiders, such as donors and aspiring researchers, to understand the crucial considerations within the field.”
Relative to my own intuitions, I feel like you underestimate the extent to which your “spine” ideally would be a back-and-forth between its different levels rather than (except for informing and improving research) a one-way street. Put differently, my own intuition is that things like “insights down the line might give rise to new questions higher up” and “trying out some tactics research might illuminate some strategic uncertainties” are very important—I would have mentioned them more prominently. In fact, I tend to be quite pessimistic about strategy research that is not in some way informed by tactics research (and informing research); I think the space for useful ‘tactics- and data-free strategy research’ is very limited, and that EA has actually mostly exhausted it in the area of existential risk reduction. [I know that this last view of mine is particularly controversial among people I epistemically respect.]
I think I would find it easier to understand to what extent I agree with your recommendations if you gave specific examples of (i) what you consider to be valuable past examples of strategy research, and (ii) how you’re planning to do strategy research going forward (or what methods you’d recommend to others).
An attempt to state my overall (weakly held) view in your terminology:
There are some easy wins in existential-risk-relevant informing research. Pursuing them often requires domain expertise in an existing field or entrepreneur-style initiative. I would like to see the EA community both reap this low-hanging fruit and more highly value the required kinds of domain expertise and entrepreneurship, in order to increase our structural ability to pursue such easy wins.
Examples of easy wins (I’m aware that some of them are already being pursued by people inside and outside of EA—that’s great!): (1) gather more comprehensive data on AI inputs and outputs (e.g. it is symptomatic that both AI Impacts’s page on Funding of AI Research and more well-resourced attempts such as the AI Index arguably are substantially incomplete), (2) create a list of AI capability milestones that can be used for forecasting, (3) a qualitative social science research project that interviews US government officials to get a sense of their understanding of AI, (4) an accessible summary of theories explaining the timing and location of the Industrial Revolution, (5) an accessible summary of what properties of AI seem most relevant for AI’s impact on economic growth, (6) <lots of specific things about China>, … - I think I could generate 10-100 such easy wins for which it’s true that I’d pay significant amounts of money for an answer I could be confident in and that doesn’t require me to do the research myself [maybe that bar is too low to be interesting].
While I agree that we face substantial strategic uncertainty, I think I’m significantly less optimistic about the marginal tractability of strategy research than you seem to be. (Exceptions about which I’m more optimistic: questions directly tied to tactics or implementation; and strategy research that is largely one of the above ‘easy wins’.) Given the resources that have been invested into strategy research, e.g. at FHI, if the marginal value were high then I would expect to be able to point to more specific valuable outputs of strategy research from the last 5 years. For example, while I tend to be excited about work that, say, immediately helps Open Phil to determine their funding allocation, I tend to be quite pessimistic about external researchers sitting at their desks and considering questions such as “how to best allocate resources between reducing various existential risks” in the abstract. To be clear, I think there are some valuable insights that can be found by such research—for example, that anthropogenic existential risk is much higher than natural risk. However, my impression is that the remaining open questions are either very intractable or require intimate familiarity with a specific context (which could, for example, be provided by tactics research or access to information internal to some organization more broadly).
I feel like you overstate the point that “[s]trategic uncertainty implies that interacting with the ‘environment’ has a reduced net value of information”. To me, this seems true only for some ways of interacting with your environment. In your example, a way of interacting with the environment that seems safe and like it has a high value of information would be to broadly understand how the government operates without making specific recommendations—e.g. by looking at relevant case studies, working in government, or interviewing government staff.
Very loosely, I expect marginal activities that effectively reduce strategic uncertainty to look more like executives debating their company’s strategy in a meeting rather than, say, Newton coming up with his theory of mechanics. I’m therefore reluctant to call them “research”.
Thanks for your detailed comment, Max!
Relative to my own intuitions, I feel like you underestimate the extent to which your “spine” ideally would be a back-and-forth between its different levels
I agree, the “spine” glosses over a lot of the important dynamics.
I think I would find it easier to understand to what extent I agree with your recommendations if you gave specific examples of (i) what you consider to be valuable past examples of strategy research, and (ii) how you’re planning to do strategy research going forward (or what methods you’d recommend to others).
Very good points. Both would indeed be highly valuable to the argument. As follow-up posts, I’m considering writing up (1) concrete projects in strategy research that seem valuable, and (2) a research agenda.
While I agree that we face substantial strategic uncertainty, I think I’m significantly less optimistic about the marginal tractability of strategy research than you seem to be.
Yeah, we’re more optimistic than you here. I don’t think it’s possible to do useful completely “tactics and data free” strategy research. But I do think there is highly valuable strategy research to do that can be grounded with a smaller amount of tactics and data gathering. What tactics research and data gathering is key? I think this is a strategic question and I think we’re currently just scratching the surface.
For example, while I tend to be excited about work that, say, immediately helps Open Phil to determine their funding allocation, I tend to be quite pessimistic about external researchers sitting at their desks and considering questions such as “how to best allocate resources between reducing various existential risks” in the abstract.
I agree that this could easily be a bad use of time for “external researchers”. I’m somewhat optimistic about these researchers examining sub-questions that would inform how to do the allocation.
Very loosely, I expect marginal activities that effectively reduce strategic uncertainty to look more like executives debating their company’s strategy in a meeting rather than, say, Newton coming up with his theory of mechanics. I’m therefore reluctant to call them “research”.
I think the idea cluster of existential risk reduction was formed through something I’d call “research”. I think, in a certain way, we need more work of this type. But it also needs to be different in some important way in order to create new valuable knowledge. We hope to do work of this nature.
Thank you for your response, David! One quick observation:
I think the idea cluster of existential risk reduction was formed through something I’d call “research”. I think, in a certain way, we need more work of this type.
I agree that the current idea cluster of existential risk reduction was formed through research. However, it seems that one key difference between our views is that you seem to be optimistic that future research of this type (though different in some ways, as you say later) would uncover similarly useful insights, while I tend to think that the space of crucial considerations we can reliably identify with this type of research has been almost exhausted. (NB: I think there are many more crucial considerations “out there”; it’s just that I’m skeptical we can find them.)
If this is right, then it seems we actually make different predictions about the future, and you could prove me wrong by delivering valuable strategy research outputs within the next few years.
Indeed! We hope we can deliver that sooner rather than later. Though foundational research may need time to properly come to fruition.
Hi Max, thank you for your engaging comment and sorry for the slow response! I’ll try to address your points one by one.
Relative to my own intuitions, I feel like you underestimate the extent to which your “spine” ideally would be a back-and-forth between its different levels rather than (except for informing and improving research) a one-way street.
I think we are more in agreement here than it seems (although I suspect we still disagree). We framed the spine more as a one-way process for the sake of clarity, but it’s definitely very iterative, with much feedback needed from lower levels of informing research! Still, I believe there is a lot of strategy research to be done—perhaps especially for questions that are not attractive for academic papers, such as which actions and institutions are needed for reducing x-risk.
I think I would find it easier to understand to what extent I agree with your recommendations if you gave specific examples of (i) what you consider to be valuable past examples of strategy research, and (ii) how you’re planning to do strategy research going forward (or what methods you’d recommend to others).
I’m going to leave this question to David and Justin, since my collaboration with Convergence was only temporary and they are much better suited to talk about their research plans than I am.
Examples of easy wins
These are all about AI (except maybe the one about China). Is that because you believe the easy and valuable wins are only there, or because you’re most aware of those?
I tend to be quite pessimistic about external researchers sitting at their desks and considering questions such as “how to best allocate resources between reducing various existential risks” in the abstract.
This is almost exactly the research question I will be looking at for my next project! (To be done at CSER as a summer research intern) I hope I can convince you once the research is done, or already with my research proposal ;)
I feel like you overstate the point that “[s]trategic uncertainty implies that interacting with the ‘environment’ has a reduced net value of information”. To me, this seems true only for some ways of interacting with your environment. In your example, a way of interacting with the environment that seems safe and like it has a high value of information would be to broadly understand how the government operates without making specific recommendations—e.g. by looking at relevant case studies, working in government, or interviewing government staff.
I agree with you here. We used the term ‘interacting’ while we should have used ‘affecting’ or ‘changing’. Simply interacting—being part of a system and/or observing it from the inside—can be very valuable and doesn’t seem very risky if one doesn’t try to make big changes. However, trying to affect/change the environment without sufficient strategic understanding could be very harmful.
Very loosely, I expect marginal activities that effectively reduce strategic uncertainty to look more like executives debating their company’s strategy in a meeting rather than, say, Newton coming up with his theory of mechanics. I’m therefore reluctant to call them “research”.
My sense is that the best company strategies are informed by a host of strategy research and informing research from a group of employees and consultants. The discussions are of course enormously useful, but they give rise to questions that should be answered by research. In addition, I expect companies’ strategies to be much better tuned to their goals than those of x-risk-oriented organizations: companies have a very clear feedback mechanism (profit) that we lack.
These are all about AI (except maybe the one about China). Is that because you believe the easy and valuable wins are only there, or because you’re most aware of those?
My guess is that AI examples were most salient to me because AI has been the area I’ve thought about the most recently. I strongly suspect there are easy wins in other areas as well.