I agree that I’d like to see more research on topics like these, but would flag that they seem arguably harder to do well than more standard X-risk research.
I think from where I’m standing, direct, “normal” X-risk work is relatively easy to understand the impact of: a 0.01% reduction in an X-risk is a fairly simple thing to reason about. When you get into more detailed models, it can be more difficult to estimate the total importance or impact, even though more detailed models are often better overall. I think there’s a decent chance that 10–30 years from now the space will look quite different (in ways similar to those you mention), given more understanding (and propagation of that understanding) of more detailed models.
One issue regarding a Big List is figuring out what specifically should be proposed. I’d encourage you to write up a short blog post on this and we could see about adding it to this list or the next one :)
Why would research on ‘minor’ GCRs like the ones mentioned by Arepo be harder than, e.g., AI alignment?
My impression is that there is plenty of good research on, e.g., the effects of CO2 on health, the Flynn effect, and Kessler syndrome, and I would say it’s much higher quality than extant X-risk research.
My point was just that understanding the expected impact seems more challenging. I’d agree that the short-term impacts of those kinds of things are much easier to understand, but it’s tricky to tell how they will affect things 200+ years from now.
Is the argument that they are less neglected?
Write a post on which aspect? You mean basically fleshing out the whole comment?
Yes, fleshing out the whole comment, basically.