(Post 3/N with some rough notes on AI governance field-building strategy. Posting here for ease of future reference, and in case anyone else thinking about similar stuff finds this helpful.)
Some hot takes on AI governance field-building strategy
More people should consciously upskill as ‘founders’, i.e. people who form and lead new teams/centres/etc. focused on making AI go well
A case for more founders: plausibly in crunch time there will be many more people/teams within labs/govs/think-tanks/etc. that will matter for how AI goes. It would be good if those teams were staffed with thoughtful, risk-conscious people.
What I think is required to be a successful founder:
Strength in strategy (to steer their team in useful directions), management (for obvious reasons), and whatever object-level work their team is doing
Especially for teams within existing institutions, starting a new team requires skill in stakeholder management and consensus building.
Concrete thing you might consider doing: if you think you might want to be a founder, and you agree with the above list of skills, think about how to close your skill gaps
More people should consciously upskill for the “AI endgame” (aka “acute risk period” aka “crunch time”). What might be different in the endgame, and what does this imply about what people should do now?
Lots of ‘task force-style advising’ work
→ people should practise it now
Everyone will be very busy, especially senior people, so simply deferring to others’ views won’t work as well
→ build your own models
More possible to mess things up really badly
→ start thinking harder about worst-case scenarios, red-teaming, etc. now, even if it seems a bit silly to e.g. spend time tightening up your personal infosec
The world may well be changing scarily fast
→ practise decision-making under pressure and uncertainty. Strategy might get even harder in the endgame
Being able to juggle six different kinds of work might be more valuable than being able to do one thing really well, because there might just be lots of different things that need doing (cf. ‘task force-style advising’)
→ specialise less? But specialisation tends to be pretty valuable, so I’m not sure this carries much weight overall
Relationships and reputation matter
→ build them now