I’ve never heard this idea proposed before, so it seems novel and interesting.
As you say in the post, the AI risk movement could gain much more awareness by associating itself with the climate risk advocacy movement, which is much larger. Compute is arguably the main driver of AI progress, compute is correlated with energy usage, and energy use generally increases carbon emissions. Limiting carbon emissions from AI is therefore an indirect way of limiting the compute dedicated to AI and slowing down the AI capabilities race.
This approach seems viable in the near term, until innovations in energy technology (e.g. nuclear fusion) weaken the link between energy production and CO2 emissions, or until algorithmic progress reduces the need for massive amounts of compute for AI.
The question is whether this indirect approach would be more effective than, or at least complementary to, a more direct approach that advocates explicit compute limits and communicates the risks from misaligned AI.