What is your p(doom|AGI)? (Assuming AGI is developed in the next decade.)
Note that Bostrom himself says in Astronomical Waste (my emphasis in bold):
> However, the true lesson is a different one. If what we are concerned with is (something like) maximizing the expected number of worthwhile lives that we will create, then in addition to the opportunity cost of delayed colonization, we have to take into account the risk of failure to colonize at all. We might fall victim to an existential risk, one where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.[8] Because the lifespan of galaxies is measured in billions of years, whereas the time-scale of any delays that we could realistically affect would rather be measured in years or decades, the consideration of risk trumps the consideration of opportunity cost. For example, a single percentage point of reduction of existential risks would be worth (from a utilitarian expected utility point-of-view) a delay of over 10 million years.
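Bostrom's "10 million years" figure falls out of a simple expected-value comparison. Here's a back-of-envelope sketch with illustrative numbers (the billion-year horizon is an assumption chosen to make the arithmetic concrete, not Bostrom's exact model):

```python
# Back-of-envelope for Bostrom's claim, with illustrative numbers.
# Assume the accessible future yields roughly constant value for T years.
T = 1e9            # assumed usable lifespan of the future, in years (illustrative)
V = 1.0            # normalize total expected future value to 1

risk_reduction = 0.01          # one percentage point of existential risk removed
gain = risk_reduction * V      # expected value gained: 0.01 * V

delay = 1e7                    # a 10-million-year delay
cost = (delay / T) * V         # fraction of the future forgone: 0.01 * V

# At these numbers the two effects are exactly comparable, so any delay
# shorter than ~10 million years is outweighed by the 1% risk reduction.
print(gain >= cost)  # prints True
```

On a longer horizon (galaxies lasting billions of years, as Bostrom notes) the break-even delay only gets larger, which is why risk reduction dominates opportunity cost in this framing.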
> I don’t think extra time pre-transformative-AI is particularly valuable except its impact on existential risk
I also think it’s bad how you (and a bunch of other people on the internet) ask this p(doom) question in a way that (in my read of things) is trying to force somebody into a corner of agreeing with you. It doesn’t feel like good faith so much as bullying people into agreeing with you. But that’s just my read of things without much thought. At a gut level I expect we die, my from-the-arguments / inside view is something like 60%, and my “all things considered” view is more like 40% doom.
> trying to force somebody into a corner of agreeing with you.
It’s really not. I’m trying to understand where people are coming from. If someone has a low p(doom|AGI), then it makes sense that they don’t see pausing AI development as urgent. Or their p(doom) relative to their actions can give some idea of how risk-taking they are (but I still don’t understand how OpenAI and their supporters think it’s OK to gamble hundreds of millions of lives in expectation for a shot at utopia without any democratic mandate).
> I don’t think extra time pre-transformative-AI is particularly valuable except its impact on existential risk
and
> “all things considered” view is more like 40% doom.
Surely this means that extra time now (pausing) is extremely valuable, precisely because of its impact on existential risk?
Or do you think that the chance we’re in a net-negative world now means that the astronomical future we could save would also most likely be net negative? I don’t think this follows. Or that continuing to allow AI to speed up now will actually prevent extinction threats in the next 10 years that we would otherwise be wiped out by? (This seems very unlikely to me.)
Sorry, I agree my previous comment was a bit intense. I think I wouldn’t get triggered if you instead asked “I wonder if a crux is that we disagree on the likelihood of existential catastrophe from AGI. I think it’s very likely (>50%), what do you think?”
P(doom) is not why I disagree with you. It feels a little like if I’m arguing with an environmentalist about recycling and they go “wow do you even care about the environment?” Sure, that could be a crux, but in this case it isn’t and the question is asked in a way that is trying to force me to agree with them. I think asking about AGI beliefs is much less bad, but it feels similar.
I think it’s pretty unclear if extra time now positively impacts existential risk. I wrote about a little bit of this here, and many others have discussed similar things. I expect this is the source of our disagreement, but I’m not sure.