I think the world either ends (or suffers some other form of implied-permanent x-risk) in the next 100 years, or it doesn’t. And if the world doesn’t end in the next 100 years, we eventually will either a) settle the stars or b) end or be drastically curtailed at some point >100 years out.
I guess I assume b) is pretty low probability with AI, like much less than a 99% chance. And 2 orders of magnitude isn’t much when all the other numbers are pretty fuzzy and span that many orders of magnitude.
(A lot of this is pretty fuzzy).
So is the basic idea that transformative AI not ending in an existential catastrophe is the major bottleneck on a vastly positive future for humanity?
No, it’s a weaker claim than that: just that P(we spread to the stars | we don’t all die or aren’t otherwise curtailed by AI in the next 100 years) > 1%.
(I should figure out my actual probabilities on AI and existential risk with at least moderate rigor at some point, but I’ve never actually done this so far).
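(For concreteness, here’s a minimal sketch of how that conditional feeds into the kind of estimate above; every number other than the >1% conditional is a placeholder for illustration, not a considered estimate.)

```python
# Rough Fermi sketch with placeholder numbers: only the 0.01 conditional
# reflects the ">1%" claim above; the other inputs are illustrative guesses.

p_no_ai_catastrophe_100y = 0.5        # placeholder: we avoid AI-driven x-risk this century
p_settle_stars_given_survival = 0.01  # lower bound from the conditional claim above
reachable_stars = 1e11                # placeholder: order-of-magnitude count of reachable stars

expected_stars = (p_no_ai_catastrophe_100y
                  * p_settle_stars_given_survival
                  * reachable_stars)
print(f"lower-bound expected stars: {expected_stars:.1e}")

# Shifting any one fuzzy input by ~2 orders of magnitude shifts the result
# by the same factor, which is small next to how many orders of magnitude
# the inputs themselves span -- the point about fuzziness above.
```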
Thanks. Going back to your original impact estimate, the bigger difficulty I have in swallowing it and the claims related to it (e.g. “the ultimate weight of small decisions you make is measured not in dollars or relative status, but in stars”) is not the probabilities of AI or space expansion, but what seems to me to be a pretty big jump from the potential stakes of a cause area (or the value possible in a future without any existential catastrophes) to the impact that researchers working on that cause area might have.
Can you be less abstract and point, quantitatively, to which of the numbers I gave seem vastly off to you, and insert your own? I definitely think my numbers are pretty fuzzy, but I’d like to see different ones rather than just arguing verbally.
(Also, I think my actual original argument was a conditional claim, so it feels a little weird to be challenged on its premises! :)).