I found this post very interesting and useful for my own thinking about the subject.
Note that while the conclusions here are intended for OP specifically, there’s another striking conclusion that goes against the views of many in this community: we need more evidence! We need to build stronger AI (perhaps in more strictly regulated contexts) in order to have enough data to reason about the dangers it poses. The “arms race” between DeepMind and OpenAI is not existentially dangerous to the world; rather, it contributes to the world’s chances of survival.
This is still at odds, of course, with the fact that rapid advances in AI create well-known non-existential dangers, so at times we trade off mitigating those dangers against learning more about existentially dangerous AI. This is not an easy decision, and it deserves careful attention, especially if you’re not a longtermist.