Thanks for the details of your disagreement : )
1. Yeah, I think this is a fair point. However, my understanding is that climate action is reasonably popular with the public, even in the US (https://ourworldindata.org/climate-change-support). It's only really when it comes to taking action that the parties differ. So if you advocated for restrictions on large training runs for climate reasons, I'm not sure it's obvious that this would necessarily carry a downside risk, only that you might get more upside benefit under a Democratic administration.
2. Yes, I think the argument doesn't make sense if you believe large training runs will be beneficial. Higher emissions seem like a reasonable price to pay for an aligned superintelligence. However, if you think large training runs will result in huge existential risks or otherwise lack upside benefits, then that makes them worth avoiding, as the AI slowdown advocacy community argues, and the costs of the emissions are clearly not worth paying.
I think, in general, most people (and policymakers) are not bought into the idea that advanced AI will cause a technological singularity or be otherwise transformative. The point of this strategy would be to get those people to take a stance on this issue that aligns with AI safety goals without their needing to be convinced of AI's transformative effects.
So while a "Pro-AI" advocate might have to convince people of the transformative power of AI to make a counter-argument, we as "Anti-AI" advocates would only have to point non-AI-affiliated people towards the climate effects of AI, without having to "AI pill" the public and policymakers. (PauseAI has apparently already looked into this and has a page that gives a sense of what the strategy in this post might look like in practice: https://pauseai.info/environmental)
3. Yes, but the question, as @Stephen McAleese noted, "is whether this indirect approach would be more effective than or at least complementary to a more direct approach that advocates explicit compute limits and communicates risks from misaligned AI." So yes, national security / competitiveness considerations may regularly trump climate considerations, but if climate considerations get trumped less often than safety considerations do, then they're the better bet. I don't know what the answer to this is, but I don't think it's obvious.