I might’ve slightly decreased nuclear risk. I worked on an Air Force contract where I trained neural networks to distinguish between earthquakes and clandestine nuclear tests given readings from seismometers.
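For the curious, here's a minimal sketch of what such a classifier might look like. To be clear, this is not my actual model: the PyTorch architecture, the three-component waveform input, and the two class labels are all illustrative assumptions.

```python
# Illustrative sketch only, not the actual contract model: a small 1-D CNN
# that labels a fixed-length seismogram window as earthquake vs. explosion.
import torch
import torch.nn as nn

class SeismicEventClassifier(nn.Module):
    def __init__(self, n_channels: int = 3):
        # e.g. a 3-component seismometer; window length is arbitrary here
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.AdaptiveAvgPool1d(1),   # collapse the time axis
        )
        self.head = nn.Linear(32, 2)   # logits for [earthquake, explosion]

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_channels, n_samples)
        return self.head(self.features(x).squeeze(-1))

# Toy usage with random tensors standing in for preprocessed waveforms.
model = SeismicEventClassifier()
waveforms = torch.randn(8, 3, 6000)
print(model(waveforms).shape)  # torch.Size([8, 2])
```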
The point of this contract was to aid in the detection (by the Air Force and the UN) of secret nuclear weapon development by signatories to the UN’s Comprehensive Test Ban Treaty and the Nuclear Non-Proliferation Treaty. (So basically, Iran.) The existence of such monitoring was intended to discourage “rogue nations” (Iran) from developing nukes.
That being said, I don’t think an Iran-Israel exchange would constitute an existential risk, unless it then triggered a global nuclear war. Also, it’s not clear that my own performance on the contract actually strengthened the deterrent to Iran. However, if (a descendant of) my model ends up being used by NATO, perhaps I helped out by decreasing the chance of a false positive.
Disclaimer: This was before I had ever heard of EA. Still, I’ve always been somewhat EA-minded, so maybe you can attribute this to proto-EA reasoning. When I was working on the project, I remember telling myself that even a very small reduction in the odds of a nuclear war happening meant a lot for the future of mankind.
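To make that intuition concrete with purely made-up numbers: if a global nuclear war would cost on the order of 10^9 lives, then reducing its probability by even 10^-6 is worth roughly 10^9 × 10^-6 = 1,000 expected lives, before counting any effect on the long-term future.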
> That being said, I don’t think an Iran-Israel exchange would constitute an existential risk, unless it then triggered a global nuclear war.
I wouldn’t sell yourself short. IMO, any nuclear exchange would dramatically increase the probability of a global nuclear war, even if the probability is still small by non-xrisk standards.
Thank you for your work!