Nice post, found this pretty well written and convincing (though I already shared the bottom line, just less firmly).
Random thoughts:
A severe extreme geopolitical tail event, such as a great power conflict between the US and China, may occur.
What type of great power conflict do you have in mind here? “Extreme tail event” makes it sound like you’re thinking of a fairly large-scale war, but “great power conflict” seems to refer to any military confrontation. E.g. I haven’t yet wrapped my head around a military confrontation between China and the US over Taiwan, and Metaculus is at ~20% for …
Also, I wonder if you have considered any potential craziness that happens conditional on the development of TAI before 2030. E.g. say TAI is developed in 2027: maybe the set of plausible scenarios for 2028 and 2029 includes enough scenarios with a >50% decrease in AI funding that I might want to increase your bottom-line forecast?
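To make that conditional structure concrete, here is a minimal back-of-the-envelope sketch in Python. All of the probabilities are hypothetical placeholders I've made up for illustration; none of them come from the post or from Metaculus:

```python
# Minimal sketch of the conditional argument above.
# All probabilities are hypothetical placeholders, not the post's numbers.

p_tai_by_2030 = 0.30        # assumed: P(TAI is developed before 2030)
p_drop_given_tai = 0.25     # assumed: P(>50% AI funding drop | TAI arrives, e.g. in 2027)
p_drop_given_no_tai = 0.05  # assumed: P(>50% AI funding drop | no TAI before 2030)

# Law of total probability: the bottom-line forecast mixes both branches,
# so enough post-TAI crash scenarios can push the overall number up.
p_drop = (p_drop_given_tai * p_tai_by_2030
          + p_drop_given_no_tai * (1 - p_tai_by_2030))

print(f"P(>50% decrease in AI funding before 2030) ~= {p_drop:.2f}")  # 0.11 here
```

The point of the sketch is just that if the post-TAI branch makes a funding crash likely enough, it can dominate the overall forecast even when P(TAI before 2030) is modest.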
(Uncertain) My guess would be that a global conflict would increase AI investment considerably, as (I think) R&D typically increases in wartime. And AI may turn out to be particularly strategically relevant.
Agreed. Looking at history as well, there’s every reason to think that war is more likely to accelerate technology development. In this case, alignment focus is also likely to disappear completely if there is a serious war.
Dem drones will be unleashed with the most advanced AI software, safety be damned.
What type of great power conflict do you have in mind here? “Extreme tail event” makes it sound like you’re thinking of a fairly large-scale war, but “great power conflict” seems to refer to any military confrontation. E.g. I haven’t yet wrapped my head around a military confrontation between China and the US over Taiwan, and Metaculus is at ~20% for …
Yeah, that’s an interesting question. I guess what I had in mind here was the US and China basically destroying each other’s fabs, or something along those lines (a compute shortage would make investment in AI labs less profitable, perhaps). But even that could increase investment as they strive to rebuild capacity? Maybe the extreme tail event that’d cause this is perpetual world peace happening!