… why you would drop everything and race to be the first to build an aligned AGI if you’re Eliezer. But if you’re Paul, I’m not sure why you would do this, since you think it will only give you a modest advantage.
Good point. Maybe another thing here is that under Paul’s view, working on AGI / AI alignment now increases the probability that the whole AI development ecosystem heads in a good direction. (Prestigious, safety-focused AI work raises the incentives for other labs to do safe AI work too, since doing so makes them look responsible.)
Speculative: perhaps the motivation for a lot of OpenAI’s AI development work is to increase its clout in the field, so that other research groups take the AI alignment stuff seriously. It also absorbs talented researchers, increasing the overall proportion of AI researchers who work in a group that takes safety seriously.