Briefly: avoid using the word "intelligence." Can you point to a specific capability that a) is minimally necessary for human-created machines to cause takeover or extinction and b) has a significant probability of happening 500 years from now, but 0% by 2043?
I’m worried that you’re trying to have a semantic debate while people like FTX Future Fund are trying to have a substantive one (which directly informs the cause prioritization of their grantmaking).
Thanks for the comment. You are right, I forgot to put the word "intelligent" in quotes. As Turing already pointed out, it is not a well-defined term, so I am using it somewhat colloquially.
But I do take issue with your second point. First of all, there is nothing insubstantial about semantics. Semantics is meaning, and meaning is everything. My point is that current AI is not building models capable of carrying out the full range of human cognition needed for planning, creativity, etc. This is exactly what they want to know, I think, because this is what is needed for "AGI." But that question is largely orthogonal to when these systems will have the capability to destroy us; they already do. Hook up a DL-based decision system to the nuclear arsenal and sit back. As for the FTX Future Fund, I just want to point out that they have an entire section devoted to "semantics": "What do you mean by AGI?"
The ML described is basically pattern recognition.
Maybe really good pattern recognition could produce a complete set of rules and logic. But it's complex, and it's unclear what the above means.
You think AIs are tools and can't have the capabilities that produce X-risk. Instead of investigating this, you pack this belief into the definition of the word "symbolic" and seize on people not fully engaging with that concept. Untangling this with you seems laborious and unpromising.
I don't really understand your comment, but I'd like to point out that I didn't invent the "symbolic" idea. Leading people on both sides (LeCun, Bengio, Marcus) agree that some form of symbolic reasoning is necessary. It IS a very complex problem, I agree, and I encourage everyone to engage with it as the top researchers in the field already have.