There are different inputs needed to advance AI safety: money, research talent, executive talent, and others. How do you see the tradeoff between these resources, and which seems most like a priority right now?
Looks like a few of Nate’s other answers partly address your question: “Right now we’re talent-constrained...” and “grow the research team...”