Hi Max, thank you for your engaging comment and sorry for the slow response! I’ll try to address your points one by one.
Relative to my own intuitions, I feel like you underestimate the extent to which your “spine” ideally would be a back-and-forth between its different levels rather than (except for informing and improving research) a one-way street.
I think we are more in agreement here than it seems (although I suspect we still disagree somewhat). We framed the spine as a one-way process mostly for the sake of clarity; in practice it’s very iterative, and a lot of feedback is needed from the lower levels, such as informing research! Still, I believe there is a lot of strategy research to be done, perhaps especially on questions that are not attractive for academic papers, such as which actions and institutions are needed for reducing x-risk.
I think I would find it easier to understand to what extent I agree with your recommendations if you gave specific examples of (i) what you consider to be valuable past examples of strategy research, and (ii) how you’re planning to do strategy research going forward (or what methods you’d recommend to others).
I’m going to leave this question to David and Justin, since my collaboration with Convergence was only temporary and they are much better suited to talk about their research plans than I am.
Examples of easy wins
These are all about AI (except maybe the one about China). Is that because you believe the easy and valuable wins are only there, or because you’re most aware of those?
My guess is that AI examples were most salient to me because AI has been the area I’ve thought about the most recently. I strongly suspect there are easy wins in other areas as well.
I tend to be quite pessimistic about external researchers sitting at their desks and considering questions such as “how to best allocate resources between reducing various existential risks” in the abstract.
This is almost exactly the research question I will be looking at for my next project (as a summer research intern at CSER)! I hope I can convince you once the research is done, or perhaps already with my research proposal ;)
I feel like you overstate the point that “[s]trategic uncertainty implies that interacting with the ‘environment’ has a reduced net value of information”. To me, this seems true only for some ways of interacting with your environment. In your example, a way of interacting with the environment that seems safe and like it has a high value of information would be to broadly understand how the government operates without making specific recommendations—e.g. by looking at relevant case studies, working in government, or interviewing government staff.
I agree with you here. We used the term ‘interacting’ when we should have used ‘affecting’ or ‘changing’. Simply interacting (being part of a system and/or observing it from the inside) can be very valuable and doesn’t seem very risky as long as one doesn’t try to make big changes. However, trying to affect or change the environment without sufficient strategic understanding could be very harmful.
Very loosely, I expect marginal activities that effectively reduce strategic uncertainty to look more like executives debating their company’s strategy in a meeting rather than, say, Newton coming up with his theory of mechanics. I’m therefore reluctant to call them “research”.
My sense is that the best company strategies are informed by a large body of strategy research and informing research produced by employees and consultants. The discussions themselves are of course enormously useful, but they also give rise to questions that should be answered by research. In addition, I expect companies’ strategies to be much better tuned to their goals than those of x-risk-oriented organizations: companies have a very clear feedback mechanism (profit) that we lack.