Thanks Peter! I appreciate the work you’ve put into synthesising a large and growing set of activities.
Nicholas Moes and Caroline Jeanmaire wrote a piece, A Map to Navigate AI Governance, which set out Strategy as ‘upstream’ of typical governance activities. Michael Aird in a shortform post about x-risk policy ‘pipelines’ also set (macro)strategy upstream of other policy research, development, and advocacy activities.
One thing that could be interesting to explore is the current and ideal relationships between the work groups you describe here.
For example, in your government analogy, you describe Strategy as the executive branch, and each of the other work groups as agencies, departments, or specific functions (e.g., HR), which would be subordinate.
Does this reflect your thinking as well? Should AI strategy workers/organisations be deferred to by AI governance workers/organisations?
Thanks Alex! I agree that it could be interesting to explore the current and ideal relationships between the work groups. I’d like to see that happen in the future.
I think that deferring sounds a bit strong, but I suspect that many workers/organisations in AI governance (and in other work groups) would like strategic insights from people working on AI Strategy and Movement Building. For instance, on questions like:
What is the AI Safety community’s agreement with/enthusiasm for specific visions, organisations and research agendas?
What are the key disagreements between competing visions for AI risk mitigation and the practical implications?
Which outcomes are good metrics to optimise for?
Who is doing/planning what in relevant domains, and what are the practical implications for a subset of workers’/organisations’ plans?
With that said, I don’t really have any well-formed opinions about how things should work just yet!