Good post!
I appreciate that these are the perspectives’ “(oversimplified) slogan form,” but still, I identify with 10 of these perspectives, and I strongly believe that this is basically correct. There are several different kinds of ways to make the future go better, and we should do all of them, pursuing whatever opportunities arise and adapting based on changing facts (e.g., about polarity) and possibilities (e.g., for coordination). So I’m skeptical of thinking in terms of these perspectives; I would think in terms of ways-to-make-the-future-go-better. A quick list corresponding directly to your perspectives:
Do macrostrategy
Enable a safe pivotal act
Promote alignment & safety
Increase lead times and benevolence/wisdom/etc. of leader
Coordinate (at the AI-lab level) to slow capability gains
Improve governance for TAI (make good policies)
Gain influence around AI
Improve institutions/coordination/civilization/etc. for TAI
Improve governance for TAI (set good precedent)
Improve governance for TAI (increase flexibility)
Coordinate (at a high level) to slow capability gains
Improve institutions/coordination/civilization/etc. generally
Gain influence generally
Do non-AI good stuff
X
We should more or less do all of these!
I agree that on aggregate it’s good for a collection of people to pursue many different strategies, but would you personally weight all of these equally? If so, maybe you’re just uncertain? My guess is that you don’t weight them all equally. Maybe another framing is to put probabilities on each and then dedicate the appropriate proportion of resources accordingly. This is a very top-down approach, though, and in reality people will do what they will! I guess it seems hard to span more than two beliefs next to each other on any axis as an individual to me. And when I look at my own work and beliefs, that checks out.
Of course they’re not equal in either expected value relative to the status quo or the appropriate level of resources to spend.
I don’t think you can “put probabilities on each”—probabilities of what?
Sorry, more like a finite budget and proportions, not probabilities.
Sure, of course. I just don’t think that looks like adopting a particular perspective.
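For concreteness, here is a minimal sketch of the “finite budget and proportions” framing from the exchange above. The pathway names and weights are purely illustrative assumptions for the example, not anyone’s actual estimates.

```python
# Illustrative sketch: allocate a finite budget across pathways in
# proportion to (hypothetical) relative weights on each.

# Hypothetical weights on a few of the pathways listed above.
weights = {
    "promote alignment & safety": 0.4,
    "improve governance for TAI": 0.3,
    "improve institutions generally": 0.2,
    "do non-AI good stuff": 0.1,
}

budget = 100.0  # finite budget, in arbitrary resource units

# Normalize the weights and split the budget proportionally.
total = sum(weights.values())
allocation = {pathway: budget * w / total for pathway, w in weights.items()}

for pathway, amount in allocation.items():
    print(f"{pathway}: {amount:.1f}")
```

The point of the framing is just that the weights sum to a fixed budget rather than being probabilities of anything; changing one weight necessarily shifts resources away from the others.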
Thanks for these points! I like the rephrasing of it as ‘levers’ or pathways; those are also good.
A downside of the term ‘strategic perspective’ is certainly that it implies that you need to ‘pick one’, that a categorical choice needs to be made amongst them. However:
-it is clearly possible to combine and work across a number of these perspectives simultaneously, so they’re not mutually exclusive in terms of interventions;
-in fact, under existing uncertainty over TAI timelines and governance conditions (i.e. parameters), it is probably preferable to pursue such a portfolio approach, rather than adopt any one perspective as the ‘consensus one’.
Still, as tamgent notes, this mostly owes to our current uncertainty: once you start to take stronger positions on (or assign certain probabilities to) particular scenarios, not all of these pathways are an equally good investment of resources.
-indeed, some of these approaches will likely entail actions that stand in tension with one another’s interventions (e.g. Anticipatory perspectives would recommend talking explicitly about AGI to policymakers, while some versions of Path-setting, Network-building, or Pivotal Engineering would prefer to avoid that, for different reasons; a Partisan perspective would prefer actions that might align the community with one actor, which might stand in tension with actions taken by a Coalitional (or multilateral Path-setting) perspective; etc.).
I do agree that the ‘Perspectives’ framing may be too suggestive of an exclusive, coherent position that people in this space must take, when what I mean is more a loosely coherent cluster of views.
--
@tamgent “it seems hard to span more than two beliefs next to each other on any axis as an individual to me” could you clarify what you meant by this?