Thanks for the post. I generally agree with your arguments, but thought I should respond as someone currently doing research on a non-alignment problem. While I want a global pause, I have no idea what I personally can do to help achieve one, whereas I at least have some idea of actions I can take that might help reduce the “massive increase in inequality/power concentration” problem.
“Solve philosophy” is not the same thing as “implement the correct philosophy”, and we need the AI to bridge that gap. There is a near-consensus among moral philosophers that factory farming is wrong, yet it persists.
This is a great point and I just wanted to call it out. I do think research is most likely to make a difference when it is produced with some thought about implementation—i.e. who the relevant audience is, how to get it to them, whether the actions you are recommending they take are actually within their power, etc.