Great to see this.

Three concerns on the Theory of Change, and a suggestion:
1. ToCs tend to be neglected once written (hopefully not by y’all!), especially because they show a possible route to impact but not timings or “who does what, and by when” to achieve the shared goal by a particular date. (In this case, that could be the date by which you realistically expect a bad or misaligned ASI to entrench.) Have you considered doing a Critical Path plan (aka “Critical Pathway”), as NASA did for Apollo?
2. Another concern is that the last stages of your plan rely on a single channel, with no redundancy. A critical path plan that allows more than one route to the goal of ASI safety implementation could be life-saving (a toy sketch of the idea follows this list).
3. AI development is moving very fast, and this ToC looks (a) quite long in elapsed time and (b) numerous in stages, which means lots of potential delays and failure modes at exactly the point when time may be short.
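If a worked example helps make “critical path” concrete, here is a minimal sketch in Python. Everything in it (the task names, durations, and dependencies) is a hypothetical placeholder, not your actual ToC stages; the point is only that once tasks, dates, and dependencies are written down, the chain that drives the end date falls out mechanically.

```python
# Minimal, purely illustrative critical-path sketch.
# Task names, durations, and dependencies are hypothetical placeholders,
# not the actual stages of this Theory of Change.
from functools import lru_cache

# Hypothetical durations, in months.
duration = {
    "research_agenda": 6,
    "technical_results": 12,
    "policy_briefing": 3,
    "lab_adoption": 6,      # a second, parallel route to safety implementation
    "regulation": 9,
}

# Dependencies: task -> prerequisite tasks.
deps = {
    "technical_results": ("research_agenda",),
    "policy_briefing": ("technical_results",),
    "lab_adoption": ("technical_results",),
    "regulation": ("policy_briefing",),
}

@lru_cache(maxsize=None)
def earliest_finish(task):
    """Earliest month a task can finish: its duration plus the latest prerequisite finish."""
    start = max((earliest_finish(p) for p in deps.get(task, ())), default=0)
    return start + duration[task]

finish = {t: earliest_finish(t) for t in duration}
last = max(finish, key=finish.get)
print(finish)
print(f"Whole plan completes at month {finish[last]}, driven by '{last}'")
```

With the same structure you can add an alternative route (here, the placeholder “lab_adoption”) and see at a glance whether it shortens the end date or simply adds redundancy if another channel fails.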
If you were in dialogue with any of…
Met research, DSTL, GCHQ, Insider Threat teams, Global Threat teams, the UCL Early Warning research centre, King’s College academics/military, UNDRR, NASA, ISRO’s AI safety committee under Prof Singvi, etc.
… and doing any of the following:
joint scoping, workshops, seminars, training, prioritisation and threat/risk work
… you might achieve prevention, preparedness, or response capacity much sooner.
I’m happy to discuss further by phone, or link you in to some of those orgs.