“AI may plausibly increase other x-risks (e.g. from Nuclear War, biosecurity, Climate Change etc.)”
I’m extremely surprised to see climate change listed here. Could you explain?
Honestly, I just wrote a list of potential x-risks to give a rough reference class. It wasn't meant to be a specific claim, just examples for the quick take!
I guess climate change might be less of an existential risk in and of itself (per Halstead), but there might be interplays between risks that increase their combined risk (I think Ord talks about this in The Precipice). I'm also sympathetic to Luke Kemp's view that we should really just care about overall x-risk, regardless of cause area, as extinction by any means would be as bad for humanity's potential.[1]
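To sketch what I mean by combined risk (a toy model of my own, not a claim about actual numbers): if the individual risks $p_i$ were independent, the total risk would be

$$P(\text{existential catastrophe}) = 1 - \prod_i (1 - p_i),$$

and interactions between risks (e.g. one catastrophe degrading our ability to handle another) would push the total above that independent baseline.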
I think it's plausible to consider x-risk from AI higher than x-risk from climate change over the rest of this century, but my current position is that this looks more like 5% vs 1%, or 1% vs 0.01%, than 90% vs 0.001%. As I said, though, I'm not sure trying to put precise probability estimates on this is that useful.
Definitely accept the general point that it’d be good to be more specific with this language in a front-page post though.
[1] Though not necessarily relevant here: some extinctions may well be a lot worse than others in that respect.
My point is that even though AI emits some amount of greenhouse gases, I'm struggling to find a scenario where it's a major contributor to global warming, since AI can also help provide solutions here.
(Oh, my point wasn't that climate change couldn't be an x-risk, though that has been disputed; more that I don't see the pathway by which AI would exacerbate climate change.)
I would take the proposed pathway to be AI → growth → climate change, or other negative side effects of growth.
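One way to make that pathway concrete is the standard Kaya identity (a general decomposition I'm bringing in here, not anything claimed above), which factors emissions as

$$\text{CO}_2\ \text{emissions} = P \times \frac{\text{GDP}}{P} \times \frac{E}{\text{GDP}} \times \frac{\text{CO}_2}{E},$$

where $P$ is population and $E$ is energy use. On this framing, AI-driven growth raises the GDP-per-capita term, and total emissions only rise if energy intensity ($E/\text{GDP}$) and carbon intensity ($\text{CO}_2/E$) don't fall fast enough to offset it, which is where the "AI can help provide solutions" point would come in.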