I thought this post was wonderful: very interestingly written, thoughtful, and insightful. Thank you for writing it, and good luck with your next steps in figuring out this problem. It makes me want to write something similar; I have been in EA circles for a long time now and, to some degree, have also failed to form strong views on AI safety. I also thought your next steps were fantastic and very sensible, and I would love to hear your future thoughts on all of those topics.
On your next steps, picking up on this one in particular:
To evaluate the importance of AI risk against other x-risk I should know more about where the likelihood estimates come from.
I was thinking of something similar for comparing bio risk, AI risk, and unknown unknown risks. However, if I were putting time into this, I would not focus solely on understanding the likelihood estimates but would look for a broad range of evidence. For example, on AI and bio, one could compare the risks by asking: what are the limits on what AI/bio systems are able to do? What do experts in each field think of the risks? Are there good historical analogues for each risk type? How convincing are the case studies of the best things people are doing to prevent risk from AI/bio? How does each topic look on a scale, neglectedness, and tractability comparison? And so on.
Anyway, just my thoughts on this research topic. Do reach out if you head in that direction and want to discuss more.
Thanks! And thank you for the research pointers.