I think that OpenAI is not worried about actors like DeepMind misusing AGI, but rather (a) is worried about actors that might not currently be on most people’s radar misusing AGI, (b) thinks that scaling up capabilities enables better alignment research (though it sees other benefits to scaling up capabilities too), and (c) earns revenue for reasons other than direct existential risk reduction wherever it sees no conflict in doing so.