> The present and past are the only tools we have to think about the future, so I expect the “pre-driven car” model to make more accurate predictions.
They’ll be systematically biased predictions, because AGI will be much smarter than the systems we have now. And it’s dubious that present-day AI should be the only reference class here (as opposed to, most notably, human brains vis-à-vis animal brains).
> I have not yet found any argument in favour of AI Risk being real that remained convincing after the above translation.
If so, then you won’t find any argument in favor of human risk being real after you translate “free will” to “acting on the basis of social influences and deterministic neurobiology”, and then you will realize that there is nothing to worry about when it comes to terrorism, crime, greed or other problems. (Which is absurd.)
Also, I don’t see how the arguments in favor of AI risk rely on language like this. Are you referring to the writing that actually lays out the issue (e.g., papers from MIRI, or Bostrom’s book), or just to offhand things people say on forums?
> It seems absurd to assign AI-risk less than 0.0000000000000000000000000000001% probability because that would be a lot of zeros.

The reality is actually the reverse: people are prone to assert arbitrarily low probabilities because asserting them is easy; justifying a model that produces such a low probability is not. See: https://slatestarcodex.com/2015/08/12/stop-adding-zeroes/

And even after reading this, you are likely to still underestimate the probability of AI risk, because you’ve anchored yourself at 0.0000000000000000000000000000001% and won’t update sufficiently upwards.
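To put a rough number on why adding zeros is cheap to write but expensive to defend, here is a back-of-the-envelope sketch (my own illustration, not from the linked post): every order of magnitude in a probability claim is, loosely, another independent tenfold reduction you are implicitly claiming to have evidence for.

```python
import math

def tenfold_reductions(p: float) -> float:
    """Rough 'cost' of a probability claim: asserting probability p is,
    loosely, asserting log10(1/p) independent tenfold reductions, each
    of which needs its own justification."""
    return math.log10(1 / p)

# The ~30-zero percentage quoted above, written as a fraction
# (zero count approximate):
p = 1e-33
print(tenfold_reductions(p))  # ~33 separate tenfold reductions to defend
```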
Anchoring points in different directions depending on context, and it’s infeasible to guess its effect in any general sense.
I’m not sure what to make of your blog post, because you are talking about “bits”, which nominally means information, not probability, and that confuses me. If you really mean that there is, say, a 1 − 2^(-30) probability that extinction comes from some cause other than AI, then your guesses are indescribably unrealistic. Here again, it’s easy to assert “2^(-30)” arbitrarily even if you don’t grasp, and can’t justify, what that number really means.
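For a sense of scale, here is some purely illustrative arithmetic (my own hypothetical numbers) on how small 2^(-30) is:

```python
# How small is 2^(-30)? Purely illustrative arithmetic.
p = 2 ** -30
print(p)      # ~9.3e-10: less than one in a billion
print(1 / p)  # ~1.07e9 independent trials expected per single occurrence

# Checking one independent case per second, you'd wait ~34 years on
# average before seeing a single hit:
seconds_per_year = 60 * 60 * 24 * 365
print((1 / p) / seconds_per_year)  # ~34 years
```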
I used to believe pretty much exactly the argument you’re describing, so I don’t think discussing this with you in detail will change my mind.
On the other hand, the last sentence of your comment makes me feel that you’re equating my not agreeing with you with my not understanding probability. (I’m talking about my own feelings here, irrespective of what you intended to say.) So, I don’t think I will change your mind by discussing this with you in detail.
I don’t feel motivated to go back and forth on this thread, because I think we will both end up feeling like it was a waste of time. I want to make it clear that I do not say this because I think badly of you.
I will try to clear up the points you said were confusing. In the Language section, I am referring to MIRI’s writing, to Bostrom’s Superintelligence, and to most in-person conversations and forum talk I’ve seen. “Bits” are an abstraction akin to log-odds; I made them up because not every statement in that post is a probabilistic claim in a rigorous sense, and the blog post was mostly written for myself. I really do estimate that there is less than a 2^(-170) chance that AI is risky in a way that would lead to extinction, that this risk can be prevented, and moreover that it is possible to make meaningful progress on such prevention within the next 20 years, along with some further qualifiers that I believe are necessary to support the cause right now.
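For readers unused to the convention: reading “bits” as base-2 log-odds, here is a minimal sketch of the conversion (my code, assuming that reading of the post):

```python
import math

def prob_to_bits(p: float) -> float:
    """Log-odds in base 2: bits = log2(p / (1 - p))."""
    return math.log2(p / (1 - p))

def bits_to_prob(b: float) -> float:
    """Inverse transform: p = 2^b / (1 + 2^b)."""
    return 2 ** b / (1 + 2 ** b)

print(prob_to_bits(0.5))     # 0.0 bits: even odds
print(bits_to_prob(-170.0))  # ~6.7e-52; at this scale p is essentially 2^(-170)
```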
> On the other hand, the last sentence of your comment makes me feel that you’re equating my not agreeing with you with my not understanding probability. (I’m talking about my own feelings here, irrespective of what you intended to say.)
Well, OK. But in my last sentence, I wasn’t talking about the use of information terminology to refer to probabilities. I’m saying I don’t think you have an intuitive grasp of just how mind-bogglingly unlikely a probability like 2^(-30) is. There are other arguments to be made on the math here, but getting into anything else just seems fruitless when your initial priors are so far out there (and when you also tell people that you don’t expect to be persuaded anyway).