I’ve decided to curate this question post because:
It exemplifies a truth-seeking approach to critique. The author, Matthew_Barnett, starts from a feeling of disagreement with EAs about AI risk, but wants to respond to specific arguments. This is a more time-intensive approach to critique than simply writing up your own impressions, but it is likely to lead to more precise arguments, which are easier to learn from.
The comments are particularly helpful. I especially think this comment from Tom Barnes is likely to help readers who are also asking, “What is the current most representative EA AI x-risk argument?”
I hope curating this post will encourage even more helpful responses from Forum users. AI risk is a heterogeneous topic, and discussions around it are constantly evolving, so both newcomers and long-time readers can benefit from answers to this post’s question.