This only holds if the future value in a universe where AIs took over is almost exactly the same as the future value in one where humans remained in control: differing by less than one part in a billion, and I would say by less than one part in a billion billion billion billion billion billion (one part in 10^54).
Your calculation implicitly assumes that preventing AI takeover permanently secures human control over the universe for billions of years. In other words, you are treating the choice as one between two possible futures: a universe entirely colonized by humans versus a universe entirely colonized by AI. That assumption is what produces the enormous numbers in your estimate.
But, in my view, there is a more realistic way to model this. If preventing AI takeover today does not permanently secure human control over the universe, but instead merely delays the eventual loss of human control, then the actual effect of prevention is much smaller than your calculation suggests. Instead of the relevant outcome being the difference between a human-controlled universe and an AI-controlled universe over billions of years, the relevant outcome is extending human control over Earth for some additional period of time before control is eventually lost anyway. That period of time, however long it might be in human terms, is presumably extremely brief by astronomical standards.
When you model the situation this way, the numbers change dramatically. The expected value of preventing AI takeover drops by orders of magnitude compared to your original estimate, which directly undercuts the argument you are making.
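To make the contrast between the two framings concrete, here is a minimal sketch. All of the constants below are purely illustrative assumptions of mine, not figures from your post:

```python
# All numbers here are illustrative assumptions, not claims from the original post.
COSMIC_HORIZON_YEARS = 1e9   # horizon assumed by the "permanent control" framing
DELAY_YEARS = 1e3            # extra years of human control under the "delay" framing
VALUE_GAP_PER_YEAR = 1.0     # value(human-controlled year) - value(AI-controlled year)

# Framing 1: preventing takeover decides the whole cosmic future.
ev_permanent_framing = COSMIC_HORIZON_YEARS * VALUE_GAP_PER_YEAR

# Framing 2: prevention only buys a finite delay before control is lost anyway.
ev_delay_framing = DELAY_YEARS * VALUE_GAP_PER_YEAR

# The ratio shows how many orders of magnitude separate the two estimates.
print(ev_permanent_framing / ev_delay_framing)  # 1000000.0 under these assumptions
```

Under these made-up numbers the delay framing shrinks the estimate by six orders of magnitude; the exact factor depends entirely on the assumed horizon and delay.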
I personally do think the probability of eventual human disempowerment is high. However, your argument implicitly treats it as 100%. If it is 99%, or even 99.9999999%, and one thinks the value of the future is significantly higher with humanity (not necessarily biological humans) in control than with AI in control, then the stakes of humanity remaining in control are still astronomical.
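The point about probabilities short of 100% can be put in expected-value terms. A minimal sketch, where the cosmic value and delay value are arbitrary illustrative units of my own choosing:

```python
def ev_of_prevention(p_eventual_loss: float, cosmic_value: float, delay_value: float) -> float:
    """Expected value of preventing takeover today, if with probability
    (1 - p_eventual_loss) prevention secures the long-run future, and with
    probability p_eventual_loss it only buys a temporary delay."""
    return (1 - p_eventual_loss) * cosmic_value + p_eventual_loss * delay_value

COSMIC_VALUE = 1e30  # illustrative: value of permanently secured human control
DELAY_VALUE = 1e3    # illustrative: value of merely delaying the loss

# Even at 99.9999999% probability of eventual loss, the tiny chance of
# permanence dominates, because the assumed cosmic value is so large.
print(ev_of_prevention(0.99, COSMIC_VALUE, DELAY_VALUE))         # ~1e28
print(ev_of_prevention(0.999999999, COSMIC_VALUE, DELAY_VALUE))  # ~1e21
```

The qualitative takeaway survives any choice of units: as long as the probability of eventual loss is strictly below 1 and the cosmic value dwarfs the delay value, the expected value of prevention remains dominated by the small chance of permanence.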