Really nice, I haven’t thought about this much before, thanks for sharing your account of the landscape. Some thoughts in reaction:
1. AI acceleration is bad under all circumstances
AGI might be really terrible. Thus, everything that makes it come earlier is bad.
I assume everybody in AI Safety thinks "AGI might be really terrible", so I'd sketch this position differently. I assume people in this category think something more like: "There is a high likelihood that we won't have enough time to design architectures for aligned systems, and the current technological pathways are very likely doomed to fail." (Ah, I see you elaborate on these points later.)
3. We should further the state-of-the-art to increase control over relevant AI systems
By being in control of relevant knowledge about AI algorithms or relevant architecture, aligned actors could control who gets access and thus decrease the risk of misalignment.
In the best case, aligned actors have sufficient power to decide who gets access to the most capable models or compute.
Maybe worth considering: I expect you would also have significant influence on what the most talented junior researchers work on, as they will be drawn to the most technologically exciting research hubs. It would be interesting to hear from people at, for example, OpenAI's and DeepMind's safety teams whether they think they have an easier time attracting very talented people who are not (initially) motivated by safety concerns.
Another thought: I think it's plausible that at some point there will be some effort at coordination between all the research groups that are successfully working on highly advanced AI systems. Having a seat at that table might be extremely valuable for arguing for increased safety investments by that group.