Re: bioweapons convention: Good point, so maybe not as straightforward as I described.
Re: predicting AI: You can always choose not to publish the research you are doing, or to inform only safety-focused institutions about it. I agree that there are some possible downsides to knowing more precisely when AI will be developed, but the downsides of not knowing seem much worse (mainly that nobody is preparing for it policy- and coordination-wise).
I think the biggest risk is getting governments too excited about AI, so I'm actually not super confident that any work on this is 10x more likely to be positive than negative.
Re: policy & alignment: I'm very confident that there is some form of alignment work that does not speed up capabilities, especially the more abstract kind, though I agree on interpretability. On policy, I would also be surprised if every avenue of governance were as risky as you describe. Laying out big-picture strategies and monitoring AI development in particular seem pretty low-risk.
Overall, I think you have done a good job scrutinizing my claims, and I'm much less confident now. Still, I'd be really surprised if every type of longtermist work were as risky as your examples, especially for someone as safety-conscious as you are. (Actually, one very positive contribution might be criticizing different approaches and showing their downsides.)
I think your points are basically right, but they are not enough to show that climate change is nearly as bad as biorisk or AI misalignment. You may get close to nuclear risk, but I'm skeptical of that as well. My main point is that extinction from climate change is much more speculative than extinction from the other causes.
Reasons:
There is some risk of runaway climate change. However, this risk seems small according to GWWC's article, and it would be overconfident to say that humanity can't protect itself against it with future technology. There is also much more time left until we reach >5 °C of warming than until the risks from engineered pathogens and powerful AI rise sharply.
Climate change will be very destabilizing. However, it's very hard to predict the long-term consequences of this, so if you're motivated by a longtermist framework, you should tackle the more plausible risks of engineered pathogens and misaligned AI more directly. One caveat here is the perspective of cascading risks, which EA is not taking very seriously at the moment.
The impacts on quality of life are not convincing from a longtermist standpoint, as I expect them to last far less than 1,000 years, whereas humanity and its descendants could live for billions of years. I also expect only a tiny fraction of future sentient beings to live on Earth.
Another consideration I often find missing in debates on x-risk from climate change is that humans would likely intervene in the climate at some stage if it seriously threatened our economies or even our lives. I haven't seen anyone make this point before, so please point me to sources if it has been discussed.
If you are still new to EA, you may come to understand the current position better as you learn more about how pressing biorisk and especially AI risk are. That said, there is probably room for some climate change funding from a longtermist perspective, and given the uncertainty surrounding cascading risks, I'd be happy to see a small fraction of longtermist resources directed to this problem.