Executive summary: The post discusses three selection effects biasing AI risk discourse: overvaluing outside views, filtering arguments for safety, and pursuing useless research based on confusion.
Key points:
- Overreliance on outside views, such as consensus opinions, double-counts evidence and feels safer than developing independent expertise.
- Strong arguments for high extinction risk often seem unsafe to share, so discourse misses the most hazardous insights.
- Confusions about core issues lead researchers down useless paths instead of toward decisive factors.
- Checking whether a question is coherent, and whether answering it helps save the world, can prevent wasted effort.
- Tabooing terms like "AGI" may help avoid distraction by irrelevant definitional debates.
- Recognizing these selection effects can improve both individual and collective epistemics.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.