For one of my grad school classes we’ve been discussing “strategy” and “grand strategy”, and some of the readings even talk about theories of victory. I’ve been loosely tracking the “AI strategy” world, and now that I’m reading about strategy, I figured it might be worth sharing a framing that I’ve found helpful (but am still uncertain about):
It may be tempting to approach “strategy” (and, somewhat relatedly, “theory”, e.g., IR theory) the way one might approach problems in the hard sciences: work hard to rigorously find the definitive right answer, and don’t go around spreading un-caveated claims you think are slightly wrong (or that could be interpreted incorrectly). However, I’ve personally found it more helpful to frame “strategy” as an optimization problem with constraints:
You have constraints with regards to:
Information collection
Analysis and processing of information
(Consider: chess and Go are games of perfect information, but you cannot analyze every possibility)
And communication.
For example, you can’t expect a policymaker (or even many fellow researchers) to read a dense 1,000-page document, and you may not have the time to write a 1,000-page document in the first place.
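To make the framing a bit more concrete, here is one loose, illustrative way to write it down (the notation is my own, not anything standard; treat it as a sketch rather than a real model). You choose which analysis or message m to produce, out of the set M of things you could realistically produce, trying to maximize expected epistemic benefit minus expected epistemic damage, subject to your budgets for collection, analysis, and communication:

$$
\max_{m \in M} \; \mathbb{E}\big[\mathrm{Benefit}(m)\big] - \mathbb{E}\big[\mathrm{Damage}(m)\big]
\quad \text{s.t.} \quad
c_{\mathrm{collect}}(m) \le k_{\mathrm{collect}}, \quad
c_{\mathrm{analyze}}(m) \le k_{\mathrm{analyze}}, \quad
c_{\mathrm{comm}}(m) \le k_{\mathrm{comm}}
$$

Here the c’s are the costs a given piece of analysis imposes (on you and on your audience), the k’s are the corresponding budgets (with the communication budget including the audience’s limited attention), and the “benefit” and “damage” terms are unpacked by the goals and anti-goals below.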
Some goal(s) (and anti-goals!):
Goal: It’s not just about discovering and conveying (accurate) information: telling people the sun is going to rise tomorrow isn’t helpful, since they already know/assume that, and telling people that the 10th decimal of pi is 5 usually isn’t very valuable, even though most people don’t know that. Rather, the key is to convey information or create understanding that the audience (or you yourself!):
Doesn’t already know,
Will believe/understand, and
Benefits from knowing or believing (or, where it is beneficial that the audience understands this information).
Goal: identifying key ideas and labeling concepts can make discussion easier and/or more efficient, if people have a shared lexicon of ideas.
Goal: Especially with strategy, you may face coordination problems: a priority conveyed by some strategy might be crucial if other people also coordinate on it, yet not be optimal to focus on at the marginal, individual level unless others do coordinate (and thus it may not initially seem like a good idea/priority, or become the natural default). Articulating the strategy can help people actually coordinate.
Anti-goal: On the flip side, you want to avoid misleading yourself or your audience, by the mirror image of the standard above: avoid conveying (misleading) claims that the audience:
Doesn’t already mistakenly believe,
Will believe, and
Is made worse off by knowing or believing. (A toy scoring sketch combining the goal and anti-goal criteria follows this list.)
Anti-goal: There are also information hazards: for example, you might discover that the Nash equilibrium in a nuclear standoff is to attempt a pre-emptive strike, when the participants would otherwise have believed pre-emption was not advantageous (a toy payoff matrix illustrating this is sketched after this list).
Anti-goal: The opposite of solving coordination problems: you cause people to shift from an optimally diverse set of pursuits to one that is over-focused on a few specific problems.
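As flagged above, here is a toy scoring sketch (my own construction, purely illustrative; the function name and parameters are made up) that combines the goal and anti-goal criteria into a single rough heuristic for whether a claim is worth conveying: it weighs novelty, believability, and the benefit of the audience coming to believe the claim against the expected harm if the claim misleads them.

```python
def claim_value(p_already_believed, p_adopted_if_told,
                benefit_if_believed, harm_if_misleading=0.0):
    """Rough expected value of conveying one claim to one audience.

    p_already_believed:  chance the audience already knows/assumes the claim
    p_adopted_if_told:   chance they will actually understand and believe it
    benefit_if_believed: how much better off they are if they come to believe it
    harm_if_misleading:  expected damage if the claim misleads them (the anti-goal side)
    """
    novelty = 1.0 - p_already_believed
    return novelty * p_adopted_if_told * benefit_if_believed - harm_if_misleading

# "The sun will rise tomorrow": believable and useful to believe, but zero novelty.
print(claim_value(p_already_believed=1.0, p_adopted_if_told=1.0,
                  benefit_if_believed=10.0))   # 0.0
# "The 10th decimal of pi is 5": novel and believable, but nearly worthless.
print(claim_value(p_already_believed=0.1, p_adopted_if_told=0.9,
                  benefit_if_believed=0.01))   # ~0.008
```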
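And here is the toy payoff matrix referred to in the information-hazard item, again purely illustrative (the payoffs are invented, not a model of any real standoff): a 2x2 game in which striking first strictly dominates waiting, so the only pure-strategy Nash equilibrium is mutual pre-emption even though mutual waiting is better for both players. If the participants had previously believed pre-emption was not advantageous, publishing this kind of analysis could itself be the hazard.

```python
ACTIONS = ["wait", "strike"]

# payoffs[(a_action, b_action)] = (payoff to A, payoff to B); the numbers are made up.
payoffs = {
    ("wait",   "wait"):   (0,    0),
    ("wait",   "strike"): (-100, 1),
    ("strike", "wait"):   (1,    -100),
    ("strike", "strike"): (-50,  -50),
}

def pure_nash_equilibria(payoffs):
    """Return action profiles where neither player gains by deviating unilaterally."""
    equilibria = []
    for a in ACTIONS:
        for b in ACTIONS:
            u_a, u_b = payoffs[(a, b)]
            a_best = all(payoffs[(a2, b)][0] <= u_a for a2 in ACTIONS)
            b_best = all(payoffs[(a, b2)][1] <= u_b for b2 in ACTIONS)
            if a_best and b_best:
                equilibria.append((a, b))
    return equilibria

print(pure_nash_equilibria(payoffs))  # [('strike', 'strike')]
```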
Ultimately, the point of all this is that you need to have reasonable expectations about the accuracy of “strategy” (and “theory”):
You probably shouldn’t expect (or even try) to find a theory that properly explains everything, as you might in, e.g., physics or mathematics. You can’t consider everything, which is why it is sometimes best to just focus on a few important concepts.
You need to balance epistemic benefits against “epistemic damage” (e.g., spreading confusion or inaccuracy).
You should try to optimize within your constraints, rather than leaving slack.
Different audiences may have different constraints:
For example, policymakers are probably less technically competent and have less time to listen to all of your caveats and nuances.
Additionally, you might have far less time to respond to a policymaker’s request for analysis than if you are writing something that isn’t pressing.
I’d be interested to hear people’s thoughts! (This is still fairly raw from my notes.)