Four components of strategy research

Convergence Analysis researches strategies for existential risk reduction. We've previously written a post about what strategy research is and why it's valuable (both in the context of x-risks and more generally). That post distinguished between values research, strategy research, tactics research, and implementation, as shown in the following diagram:

[Diagram: the progression from values research to strategy research to tactics research to implementation]

Furthermore, that post provided a definition of strategy research: "High-level research on how to best achieve a high-level goal." (Strategies themselves will be combinations of specific interventions, coordinated and sequenced in specific ways, and aimed at achieving high-level goals.)

In this post, we:

  • Outline one way to decompose strategy research

    • Specifically, we break it down into the following four components: mapping the space, constructing strategies, modelling causality, and prioritizing between strategies

  • Describe how each of those components can be impactful

  • Describe how each of those components can be conducted

  • Provide examples to illustrate these points

We hope this model can help guide future strategy research and facilitate clearer thinking and discussion on the topic.

One thing to note: To some extent, these components can be seen as stages, typically best approached in the order in which we've listed them here. But there will also often be value in jumping back and forth between them, or in some people specialising in a certain component rather than moving through them in order. We discussed similar points in our earlier post in relation to values research, strategy research, tactics research, and implementation.

Mapping the space

Once we've formed a high-level goal (through values research, or some other way of resolving value uncertainty), we must develop strategies to achieve this goal. But initially, we may not even know what actual actions (i.e., interventions) we have to choose from, or what consequences we should be thinking about. It may be as if the space consists entirely of "unknown unknowns". Thus, we must first map the space, to identify as many options and consequences worth considering as we can.

In the context of x-risk reduction, research questions for mapping the space include:

  • What are the various ways we can combat x-risk?

  • How could x-risk reduction efforts backfire?

  • What are possible x-risk factors?

Only once we have at least some answers to such mapping questions can we proceed to the other components of strategy research (constructing strategies, modelling, and prioritizing). To illustrate this, imagine trying to come up with x-risk reduction strategies before even noticing that outreach to national security policymakers is one intervention you could use, that overly dramatic or simplistic outreach could make it less likely that future outreach will be taken seriously, or that misaligned AI is one risk factor worth thinking about.

How does one actually do mapping research? This will likely vary a lot depending on the cause area of interest (e.g., x-risk, animal welfare) and on how much mapping has already occurred. At the very start of mapping, you might ask:

  • What interventions are already being applied (or have been or will be applied) in this domain by other people?

  • What interventions used in similar domains may be useful here?

Once at least some options have been identified, you might ask:

  • What interventions lie between those we've imagined so far? Is there a way we can produce a hybrid of two interventions which has the good properties of both but the downsides of neither? Perhaps a compromise or mixed strategy is best?

  • What interventions lie beyond those we've imagined so far? What if we take a certain intervention and exaggerate part of it; might that improve it?

  • What interventions lie perpendicular to those we've imagined so far? What if we were to do something totally different from the interventions identified so far?

We can also ask analogous questions to identify potential consequences. E.g., what consequences are already being faced? What consequences have been faced in similar domains? What consequences may lie between, beyond, or perpendicular to those we've imagined so far?

Constructing strategies

Let's say we've done some mapping, and identified some interventions and consequences worth considering. What now?

We shouldn't simply pick an intervention and use it. Nor should we even spend too much effort on modelling the effects of (or prioritizing between) these interventions themselves, as if each would be executed in isolation.

This is because it's likely that we (or the broader community we're part of) will use multiple interventions, that there'll be interactions between their effects, and that there'll be better and worse ways to sequence them.

Thus, ideally, we should first construct strategies (i.e., think of specific ways of combining, coordinating, and sequencing interventions) and then model the effects of (and prioritize between) those strategies.

For example, we wouldn't want to view in isolation the options of "funding biosecurity-related PhDs", "recruiting senior researchers to do biosecurity research", and "connecting biosecurity experts with relevant policymakers". Instead, we might recognise that a combination of these interventions could be more powerful than the sum of its parts, and that the order in which we execute these interventions will also matter.
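
To make the interaction point concrete, here is a toy sketch in Python. It reuses the biosecurity interventions from the example above, but the standalone impacts and interaction effects are numbers we have invented purely for illustration; the point is only that a package's value is not just the sum of its parts.

```python
# Toy illustration (invented numbers) of why we evaluate strategies rather than
# isolated interventions: interaction effects can make a combination worth more
# (or less) than the sum of its parts.

# Assumed standalone impact of each intervention, in arbitrary units.
standalone = {
    "fund_phds": 3.0,
    "recruit_senior_researchers": 4.0,
    "connect_experts_to_policymakers": 2.0,
}

# Assumed pairwise interaction effects (positive = synergy, negative = friction).
interactions = {
    ("fund_phds", "recruit_senior_researchers"): 2.0,  # seniors mentor the new PhDs
    ("recruit_senior_researchers", "connect_experts_to_policymakers"): 3.0,
    ("fund_phds", "connect_experts_to_policymakers"): -0.5,  # premature outreach may misfire
}

def strategy_value(chosen):
    """Value of a package: standalone impacts plus any interaction effects."""
    value = sum(standalone[i] for i in chosen)
    for (a, b), effect in interactions.items():
        if a in chosen and b in chosen:
            value += effect
    return value

full_package = set(standalone)
print("Sum of isolated impacts:  ", sum(standalone.values()))      # 9.0
print("Value of the full package:", strategy_value(full_package))  # 13.5
```

Sequencing could be represented in a similar way, e.g. by letting an interaction effect depend on which intervention comes first; the model only needs to be rich enough to surface considerations we would otherwise miss.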

To construct strategies, we might ask questions like:

  • What interventions seem like they would naturally fit together or support each other?

  • Which combinations might create synergies? Which might create frictions or negative interaction effects?

  • In what order should the interventions be used? Should they be executed one at a time, in parallel, or with just partial overlaps?

Modelling causality

Once we've constructed strategies, we should build causal models for them. In other words, once we've linked interventions together into packages, we should see how each package might link together with outcomes we care about.

As with all modelling, we want our models to simplify our complex world, help us separate useful lessons from noise, and help us think about how the relevant parts of the world are configured and how they interact.

We may build qualitative models, such as this post's model of four components of strategy research (each of which is an intervention we can use) and the sorts of impacts (consequences) each can have. We may also build more quantitative models, such as ALLFED's model of the effects of a package of interventions related to alternative foods.
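
As a gesture at what a very simple quantitative causal model might look like, here is a minimal sketch in Python. It is not based on ALLFED's actual model; the causal structure, variable names, and numbers are assumptions made purely for illustration. The idea is just that a strategy's interventions shift a few intermediate variables, which combine to shift the outcome we care about.

```python
# A minimal sketch of a quantitative causal model linking a strategy
# (a package of interventions) to an outcome we care about.
# All structure and numbers here are hypothetical, for illustration only.

BASELINE_RISK = 0.05  # assumed baseline probability of the bad outcome

def estimated_risk(funding_boost, recruitment_boost, policy_access_boost):
    """Estimate the remaining risk after executing a package of interventions.

    Each argument is the assumed fractional improvement the intervention
    makes to an intermediate variable (e.g., 0.10 = a 10% improvement).
    """
    # Assumed causal pathway: funding and recruitment raise research capacity;
    # policymaker access raises the uptake of that research.
    research_capacity = (1 + funding_boost) * (1 + recruitment_boost)
    policy_uptake = 1 + policy_access_boost

    # Assumed interaction: capacity only reduces risk insofar as it is taken up.
    effectiveness = research_capacity * policy_uptake
    return BASELINE_RISK / effectiveness

# Compare two hypothetical strategies built from the same interventions
# but with different emphases.
strategy_a = estimated_risk(funding_boost=0.30, recruitment_boost=0.05,
                            policy_access_boost=0.05)
strategy_b = estimated_risk(funding_boost=0.10, recruitment_boost=0.15,
                            policy_access_boost=0.20)
print(f"Strategy A: estimated risk {strategy_a:.4f} (reduction {BASELINE_RISK - strategy_a:.4f})")
print(f"Strategy B: estimated risk {strategy_b:.4f} (reduction {BASELINE_RISK - strategy_b:.4f})")
```

Even a toy model like this forces us to state which intermediate variables we think matter and how the interventions interact, which is much of the value of the exercise.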

To model causality, we might ask questions like:

  • What effects has this kind of strategy (or the interventions it's composed of) had in the past?

  • What effects have similar strategies had in the past?

Prioritizing between strategies

Say we've now identified many possible interventions and consequences (mapped the space); identified ways of combining, coordinating, and sequencing these (constructed strategies); and thought about the outcomes each strategy might lead to (modelled causality). We also have ideas about how good or bad those outcomes would be (based on our values or moral views). This will leave us ready to prioritize between these strategies. Such prioritization can be extremely valuable, as it's likely that the best strategies will be far better than the typical strategies (in a pattern found across many domains, such as startup success or researcher output).

Prioritization, as we use the term here, involves answering questions like:

  • What is the expected value of each strategy we're considering? (This could be calculated explicitly, as in the sketch after this list, but we could also use qualitative methods that capture the same basic idea.)

  • What are the best opportunities to reduce the risk of nuclear war?

  • Which x-risks should receive most of our money/attention, long-term?

  • Which x-risks should receive our money/attention first?
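
As promised above, here is a minimal sketch in Python of the explicit version of the expected-value question. The strategies, outcome probabilities, and payoff values are all invented for illustration; real prioritization would rest on much messier estimates and more qualitative judgment.

```python
# A minimal sketch of explicit expected-value prioritization between strategies.
# The strategies, probabilities, and values below are invented for illustration.

strategies = {
    "Fund biosecurity PhDs, then connect graduates to policymakers": [
        # (probability of this outcome, value of this outcome in arbitrary units)
        (0.10, 1000),   # large success: meaningfully improved policy
        (0.60, 100),    # moderate success: stronger research field
        (0.30, 0),      # little effect
    ],
    "Immediate broad public outreach on biorisk": [
        (0.05, 800),    # large success: lasting salience among policymakers
        (0.45, 50),     # moderate success
        (0.50, -100),   # backfire: simplistic outreach reduces credibility
    ],
}

def expected_value(outcomes):
    """Sum of probability-weighted values across possible outcomes."""
    return sum(p * v for p, v in outcomes)

# Rank the strategies by expected value, highest first.
for name, outcomes in sorted(strategies.items(),
                             key=lambda kv: expected_value(kv[1]),
                             reverse=True):
    total_prob = sum(p for p, _ in outcomes)
    assert abs(total_prob - 1.0) < 1e-9, "outcome probabilities should sum to 1"
    print(f"{expected_value(outcomes):8.1f}  {name}")
```

The value here is not in the numbers, which are invented, but in the discipline of laying out each strategy's possible outcomes, including ways it could backfire, and weighting them by how likely and how valuable (or harmful) they would be.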

Relationship to tactics research

Roughly speaking, tactics research involves basically the same components of research listed above, but at a more concrete level. This is because there isn't really a clear line between "strategies" and "interventions"; just as all strategies are composed of various interventions, all interventions are in reality composed of various even more concrete steps. So tactics research involves, among other things:

  1. Mapping the space of very specific steps that could be taken (e.g., emailing vs in-person contact, as different steps for recruiting AI safety researchers)

  2. Constructing interventions out of these very specific steps (e.g., in-person contact from an established researcher talking about the orthogonality thesis to machine learning PhD students)

  3. Modelling the effects of those interventions

  4. Prioritizing between those interventions

There are obviously ways in which these components of tactics research overlap with the abovementioned components of strategy research. This makes sense; we should expect fuzzy boundaries, feedback loops, and jumping back and forth between strategy and tactics research.

Conclusion

In this post, we've outlined a model of strategy research as composed of four components: mapping the space, constructing strategies, modelling causality, and prioritizing between strategies. We're sure other models could be generated, and could likely also be helpful. But this is a model that has made our own thinking clearer and more effective, and we hope it can do the same for others researching strategies for pressing global problems, and for those aiming to understand or use such research.

This post blends together parts written by Justin Shovelain, Siebe Rozendal, Michael Aird, and David Kristoffersson. We also received useful feedback from Ben Harack.