What does it mean for something to be an optimizer?
Expected utility maximization seems to cover this fully. More general models aren't particularly useful for saving the world.
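(For concreteness, the standard decision-theoretic picture I have in mind: an optimizer is an agent that selects the action maximizing expected utility over outcomes, where $A$ is its action set, $U$ a utility function over outcomes, and $p(o \mid a)$ its beliefs about how actions lead to outcomes:

$$a^* \in \operatorname*{arg\,max}_{a \in A} \; \mathbb{E}_{o \sim p(\cdot \mid a)}\big[U(o)\big]$$

The symbols here are just illustrative notation, not anything load-bearing from the original question.)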
For what it’s worth, I have significant disagreements with basically all of your short replies to these basic questions, and I’ve been heavily engaged in AI alignment discussions for several years. So, I strongly disagree with your claim that these questions are “either already solved or there’s a good reason why thinking about them is not useful to the solution”, at least in the way you seem to think they have been solved.
I feel like they’re at least solved enough that they’re not what should be getting focused on. I predict that in worlds where we survive, spending time on those questions doesn’t end up having cashed out into much value.
Executive summary: The post discusses three selection effects biasing AI risk discourse: overvaluing outside views, filtering arguments for safety, and pursuing useless research based on confusion.
Key points:
Overreliance on outside views like consensus opinions double-counts evidence and feels safer than developing independent expertise.
Strong arguments for high extinction risk often look unsafe to share, so discourse misses hazardous insights.
Confusions about core issues lead researchers down useless paths instead of focusing on decisive factors.
Checking whether a question is coherent or helps save the world can avoid wasted effort.
Tabooing terms like AGI may help avoid distraction on irrelevant definitional debates.
Recognizing these selection effects can improve individual and collective epistemics.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.