@Wei Dai, I understand that your plan A is an AI pause (+ human intelligence enhancement), and I agree with you that this is the best course of action. Nonetheless, I’m interested in what you see as plan B: if we don’t get an AI pause, is there any version of ‘hand off these problems to AIs’ / ‘let ‘er rip’ that you feel optimistic about? Or which you at least think will result in lower p(catastrophe) than other versions? If you had $1B to spend on AI labour during crunch time, what would you get the AIs to work on?
The answer would depend a lot on the alignment/capabilities profile of the AI. But one recent update I’ve made is that humans are really terrible at strategy (in addition to philosophy), so if there were no way to pause AI, it would help a lot to get good strategic advice from AI during crunch time. This implies that maybe AI strategic competence > AI philosophical competence in importance (subject to all the usual disclaimers, like dual use and how to trust or verify its answers). My latest LW post has a bit more about this.
(By “strategy” here I especially mean “grand strategy” or strategy at the highest levels, which seems more likely to be neglected versus “operational strategy” or strategy involved in accomplishing concrete tasks, which AI companies are likely to prioritize by default.)
So for example, if we had an AI that’s highly competent at answering strategic questions, we could ask it “What questions should I be asking you, or what else should I be doing with my $1B?” (though this may have to be modified based on things like how much we can trust its various kinds of answers, how well it understands my values/constraints/philosophies, etc.).
If we do manage to get good and trustworthy AI advice this way, another problem would be how to get key decision makers (including the public) to see and trust such answers, as they wouldn’t necessarily think to ask such questions themselves, nor trust the AI’s answers by default. But that’s another thing a strategically competent AI could help with.
BTW, your comment made me realize it’s plausible that AI could accelerate strategic thinking and philosophical progress much more than scientific and technological progress, because the latter could become bottlenecked on feedback from reality (e.g., waiting for experimental results) whereas the former seemingly wouldn’t be. I’m not sure what implications this has, but I want to write it down somewhere.
Moreover, above we were comparing AIs to the best human philosophers / to a well-organised long reflection, but the actual humans calling the shots are far below that bar. For instance, I’d say that today’s Claude has better philosophical reasoning and better starting values than the US president, or Elon Musk, or the general public. All in all, best to hand off philosophical thinking to AIs.
One thought I have here is that AIs could give very different answers to different people. Do we have any idea what kind of answers Grok is (or will be) giving to Elon Musk when it comes to philosophy?