As someone who recently set up an AI safety lab, success rates have certainly been on my mind. It's a challenging endeavour, but I think the reference class we're in might be better than it seems at first.
I think a big part of what makes succeeding as a for-profit tech start-up challenging is that so many other talented individuals are chasing the same good ideas. For every Amazon there are thousands of failed e-commerce start-ups. Clearly, Amazon did something much better than the competition. But what if Amazon didn't exist? What if there were a company that was a little more expensive and had longer shipping times? I'd wager that company would still be highly successful.
Far fewer people are working on AI safety. That's a bad thing in itself, but it does at least mean there's more low-hanging fruit to be picked. I agree with [Adam Binks](https://forum.effectivealtruism.org/posts/PJLx7CwB4mtaDgmFc/critiques-of-non-existent-ai-safety-labs-yours?commentId=eLarcd8no5iKqFaNQ) that academic labs might be a better reference class. But even there, AI safety has had far less attention paid to it than, say, developing treatments for cancer or unifying quantum mechanics and general relativity.
So overall it's far from clear to me that it's harder to make progress on AI safety than to solve outstanding challenge problems in academia, or to build a $1bn+ company.