From this post, I infer that the rough, big-picture theory of change for Nonlinear is as follows:
1. “Our team of analysts generate, identify, and evaluate potentially high impact opportunities.”
2. “Once a top idea has been vetted, we use a variety of tools to turn it into a reality, including grantmaking, advocacy, RFPs, and incubating it ourselves.”
3. “The existence of those projects/ideas/opportunities causes a reduction in existential risk from AI.”
Does that sound accurate to you? In particular, is that third step your primary pathway and objective, or do you have other pathways in mind (like the publication of Nonlinear’s research reports having an impact in itself) or other objectives (like trajectory changes other than existential risk reduction)?
Also, do you already have a more explicit and fleshed-out theory of change, perhaps in diagram form? This might cover things like what audiences you seek to reach, what kinds of projects you seek to create, and in what ways you think they’ll reduce risks. (This is just a question, not a veiled critique; I think your current theory of change may be sufficiently explicit and fleshed out for this very early stage of the project.)
ETA: Ah, I now see that your site’s About page already provides more info on this. I think that shifts me from wondering what you see as the relevant pathways and objectives to wondering how much weight you expect to put on each (e.g., x-risk reduction generally vs AI safety vs cause prioritisation, or grantmaking vs action recommendations vs donation recommendations).