Thanks, spelling these kinds of things out is what I was trying to get at; working through them could make the case stronger.
I don’t have time to go through these points one by one here, but the one thing I would point out is that this strategy should be risk-reducing in the cases where the risk is real, i.e. it should not rest on arguments from current public opinion and the like.
That is, in the worlds where there is enough buy-in and commercial interest to scale up AI so much that it meaningfully matters for electricity demand, I think climate advocates will be side-lined. Essentially, I buy the Shulmanerian point that if the prize from AGI is observably very large, then things that look inhibiting now, like NIMBYism and environmentalists, will matter less than one would expect from extrapolating the current political economy.