As someone who recently set up an AI safety lab, I’ve certainly had success rates on my mind. It’s challenging, but I think the reference class we’re in might be better than it seems at first.
I think a big part of what makes succeeding as a for-profit tech start-up so challenging is that so many other talented people are chasing the same good ideas. For every Amazon there are thousands of failed e-commerce start-ups. Clearly, Amazon did something much better than the competition. But what if Amazon didn’t exist? What if there were a company that was a little more expensive and had longer shipping times? I’d wager that company would still be highly successful.
Far fewer people are working on AI safety. That’s a bad thing, but it does at least mean there’s more low-hanging fruit to be picked. I agree with [Adam Binks](https://forum.effectivealtruism.org/posts/PJLx7CwB4mtaDgmFc/critiques-of-non-existent-ai-safety-labs-yours?commentId=eLarcd8no5iKqFaNQ) that academic labs might be a better reference class. But even there, AI safety has received far less attention than, e.g., developing treatments for cancer or unifying quantum mechanics and general relativity.
So overall it’s far from clear to me that it’s harder to make progress on AI safety than to solve outstanding challenge problems in academia, or to build a $1 bn+ company.
This is an important point. There’s huge demand for research leads in general, but the people hiring & funding often have pretty narrow interests. If your agenda is legibly exciting to them, then you’re in a great position. Otherwise, there can be very little support for more exploratory work. And I want to emphasize the *legible* part here: you can do something that’s great & would be exciting to people if they understood it, but novel research is often time-consuming to understand, and these are time-constrained people who won’t invest that time unless they have a strong signal that it’s promising.
A lot of this problem is downstream of very limited grantmaker time in AI safety. I expect this to improve in the near future, but not enough to fully solve the problem.
I do like the idea of a more agenda-agnostic research organization. I’m striving for FAR to be more open-minded, but we can’t support everything, so we’re still fairly opinionated: we prioritize agendas that we’re most excited by & that are a good fit for our research style (engineering-intensive empirical work). I’d like to see another org in this space set up to support a broader range of agendas, and I’m happy to advise people who’d like to start something like this.