I disagree with the substance, but I don’t understand why it gets downvoted.
I would be curious to hear what the nature of your disagreement is : )
Something like this:
I think an obvious risk to this strategy is that it would further polarize AI risk discourse and make it more partisan, given how strongly the climate movement is aligned with Democrats.
I think pro-AI forces can reasonably claim that the long-term impacts of accelerated AI development are good for the climate (faster technological progress and expanded industrial capacity to build clean energy sooner), so I think the core substantive argument is actually quite weak, and transparently so. One needs to hold fairly weird assumptions to believe the short-term emissions from getting to AGI would matter from a climate perspective. E.g. even if you believed the US would need to double its emissions for a decade to get to AGI, you would probably still want to bear that cost given how much easier AGI would make global decarbonization, even if you only looked at it through a climate-maximalist lens.
National security / competitiveness considerations regularly trump climate considerations, and that was true even in a period that was more climate-focused than the next couple of years will be, so it seems hard to imagine this would really constrain things. I find it very hard to imagine a situation where a significant share of US policymakers decide they really need to get behind accelerating AGI, but then don’t do it because some climate activists protest it.
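To put rough numbers on that hypothetical (a back-of-envelope sketch with illustrative round figures, assuming current US fossil CO2 emissions of roughly 5 GtCO2 per year and global emissions of roughly 40 GtCO2 per year, neither of which is from the comment itself):

\[
\underbrace{5\,\mathrm{GtCO_2/yr} \times 10\,\mathrm{yr}}_{\text{extra emissions from a decade of doubling}} \approx 50\,\mathrm{GtCO_2} \approx 1.25 \times \underbrace{40\,\mathrm{GtCO_2/yr} \times 1\,\mathrm{yr}}_{\text{one year of global emissions}}
\]

On those rough numbers, the one-off cost of the hypothetical is on the order of a single year of current global emissions, so if AGI pulled global decarbonization forward by even a couple of years it would more than pay for itself in climate terms.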
So, to me, it seems like a very risky strategy with limited upside, but plenty of downside in terms of further polarization and calling a bluff on what is ultimately an easy-to-disarm argument.
Thanks for the details of your disagreement : )
1. Yeah, I think this is a fair point. However, my understanding is that climate action is reasonably popular with the public, even in the US (https://ourworldindata.org/climate-change-support); it’s really only when it comes to actually taking action that the parties differ. So if you advocated for restrictions on large training runs for climate reasons, I’m not sure it’s obvious that this would carry a downside risk, only that you might get more of the upside benefits under a Democratic administration.
2. Yes, I think the argument doesn’t make sense if you believe large training runs will be beneficial. Higher emissions seem like a reasonable price to pay for an aligned superintelligence. However, if you think large training runs will result in huge existential risks or otherwise not have upside benefits then that makes them worth avoiding—as the AI slowdown advocacy community argues—and the costs of emissions are clearly not worth paying.
I think in general most people (and policymakers) are not bought into the idea that advanced AI will cause a technological singularity or be otherwise transformative. The point of this strategy would be to get those people (and policymakers) to take a stance on this issue that aligns with AI safety goals without having to be bought into the transformative effects of AI.
So while a “Pro-AI” advocate might have to convince people of the transformative power of AI to make a counter-argument, we as “Anti-AI” advocates would only have to point non-AI-affiliated people towards the climate effects of AI, without having to “AI pill” the public and policymakers. (PauseAI has apparently looked into this already and has a page that gives a sense of what the strategy in this post might look like in practice: https://pauseai.info/environmental)
3. Yes, but the question, as @Stephen McAleese noted, “is whether this indirect approach would be more effective than or at least complementary to a more direct approach that advocates explicit compute limits and communicates risks from misaligned AI.” So yes, national security / competitiveness considerations may regularly trump climate considerations, but if climate considerations get trumped less often than safety considerations do, then they’re the better bet. I don’t know what the answer to this is, but I don’t think it’s obvious.
Thanks, spelling these kinds of things out is what I was trying to get at; working through them could make the case stronger.
I don’t have time to go through these points one by one here, but the one thing I would point out is that this strategy should be risk-reducing in the cases where the risk is real, i.e. the case shouldn’t rest on current public opinion and the like.
I.e. in the worlds where there is enough buy-in and commercial interest to scale up AI so much that it meaningfully matters for electricity demand, I think climate advocates will be sidelined. Essentially, I buy the Shulmanerian point that if the prize from AGI is observably really large, then things that look inhibiting now, like NIMBYism and environmentalists, will not matter as much as one would think if one extrapolated from current political economy.