"it would be ideal for you to work on something other than AGI safety!"
I disagree. Here is my reasoning:
Many people with extensive ML knowledge are not working on safety, either because they are not convinced of its importance or because they haven't fully wrestled with the issue.
In this post, Ada-Maaria articulates how she arrived at her current beliefs and how AI safety communication to date has affected her.
She has evaluated the persuasiveness of these arguments much more rigorously than anyone else I've read.
If she continues down this path, she could discover either unstated assumptions that the AI safety community has failed to communicate, or actual flaws in the AI safety argument.
This would either make it easier for AI safety folks to communicate their case or uncover assumptions that need to be verified.
Either would be valuable!