Part of me wonders if a better model than the one outlined in this post is for Nonlinear to collaborate with well-established AI research organisations that can advise on high-impact interventions, which Nonlinear then does the grunt work to turn into reality.
Even in this alternative model I agree that Nonlinear would probably benefit from someone with in-depth knowledge of AI safety as a full-time employee.
This is indeed part of our plan! No need to re-invent the wheel. :)
One of our first steps will be to canvass existing AI Safety organizations and compile a comprehensive list of ideas they want done. We will do our own due diligence before launching any of them, but I would love for Nonlinear to be the organization people come to when they have a great idea that they want to see happen.
Sounds good!
Replied to hiring full-timer above https://forum.effectivealtruism.org/posts/fX8JsabQyRSd7zWiD/introducing-the-nonlinear-fund-ai-safety-research-incubation?commentId=ANTbuSPrNTwRHvw73