This organization is interesting. I have a few questions, which I'll split into separate comments so people can vote on them individually:
What made you decide to start an organization focused on researching high-leverage AI safety interventions?
Semi-related: Could you say more about precisely what the scope of Nonlinear will/might be?
Some possibilities that come to mind, in terms of areas addressed:
- Just direct technical AI safety work
- Also “meta” work that increases the amount/quality of direct technical AI safety work, e.g. the AI Safety Camp
- Also AI governance work
- Also work on other existential risks or longtermist priorities
- Also work that’s not focused specifically on AI safety or governance but could still help with work on those things, such as work on forecasting or improving institutional decision-making
And some possibilities that come to mind in terms of type of project:
- Tuition costs (e.g. for PhD students)
- Teaching buyouts
- Independent research projects lasting something like 0.2-2 FTE-years
- Projects like the AI Safety Camp
- Projects like new startups working on building aligned AGI