One way in which AI safety labs differ from the reference class of Y Combinator startups is in their impact. Conditioned on the median Forum user’s assessment of X-risk from AI, the leader of a major AI safety lab probably has more impact than the median U.S. senator, Fortune 500 CEO, or chief executive of a smaller regional or even national government. Those jobs are hard in their own ways, but we expect and even encourage an extremely high amount of criticism of the people who hold them.
I am not suggesting that is the proper reference class for leaders of AI labs that have raised at least $10MM . . . and I don’t think it is. But I think the proper scope of criticism is significantly higher than for (e.g.) the median CEO whose company went through Y Combinator.[1] If a startup CEO messes up and their company explodes, the pain is generally going to be concentrated among the company’s investors, lenders, and employees . . . a small number of people, each of whom consented to bearing that risk to a significant extent. If I’m not one of those people, my standing to complain about the startup CEO’s mistakes is significantly constrained.
In contrast, if an AI safety lab goes off the rails and becomes net-negative, that affects us all (and future generations). Even if the lab is merely ineffective, its existence will have drained fairly scarce resources (potential alignment researchers and EA funding) from others in the field.
I definitely agree that people need to be sensitive to how hard running an AI safety lab is, but I also want to affirm that criticism is legitimate in principle.
To be clear, I don’t think Anneal’s post suggests that this is the reference class for deciding how much criticism of AI lab leaders is warranted. However, since I didn’t see a clear reference class, I thought it was worthwhile to discuss this one.