I definitely disagree with the OP that Mitchell was being “dismissive” for stating her honest belief that near-term AGI is unlikely. This is a completely valid position held by a significant portion of AI researchers.
I didn’t state that. I think Mitchell was “dismissive” (even aggressively so) by calling the view of Tegmark, Bengio, and indirectly Hinton and others “ungrounded speculations”. I have no problem with someone stating that AGI is unlikely within a specific timeframe, even if I think that’s wrong.
I agree with most of what you wrote about the debate, although I don’t think that Mitchell presented any “good” arguments.
intelligent AI would be able to figure out what we wanted
It probably would, but that’s not the point of the alignment problem. The problem is that even if it knows what we “really” want, it won’t care about it unless we find a way to align it with our values, needs, and wishes, which is a very hard problem (if you doubt that, I recommend watching this introduction). We understand pretty well what chickens, pigs, and cows want, but we still treat them very badly.
I think calling their opinions “ungrounded speculation” is an entirely valid opinion, although I would personally use the more diplomatic term “insufficiently grounded speculation”. She acknowledges that they have reasons for their speculation, but does not find those reasons to be sufficiently grounded in evidence.
I do not like that her stating her opinions and arguments politely and in good faith is being described as “aggressive”. I think this kind of hostile attitude towards skeptics could be detrimental to the intellectual health of the movement.
As for your alignment thoughts, I have heard the arguments and disagree with them, but I’ll just link to my post on the subject rather than drag it in here.
I think calling their opinions “ungrounded speculation” is an entirely valid opinion, although I would personally use the more diplomatic term “insufficiently grounded speculation”.
I disagree with that. Whether said politely or not, it dismisses the other person's views without offering any arguments at all. It's like saying "you're talking bullshit". Now, if you do that and then follow up with "because, as I can demonstrate, facts A and B clearly contradict your claim", then that may be okay. But she didn't do that.
She could have said things like "I don't understand your argument", or "I don't see evidence for claim X", or "I don't believe Y is possible, because …". Even better would be to ask: "Can you explain to me why you think an AI could become uncontrollable within the next 20 years?", and then respond to the arguments.
I think we’ll just have to disagree on this point. I think it’s perfectly fine to (politely) call bullshit, if you think something is bullshit, as long as you follow it up with arguments as to why you think that (which she did, even if you think the arguments were weak). I think EA could benefit from more of a willingness to call out emperors with no clothes.
I think it’s perfectly fine to (politely) call bullshit, if you think something is bullshit, as long as you follow it up with arguments as to why you think that
Agreed.
(which she did, even if you think the arguments were weak)
That's where we disagree: strong claims ("Two Turing Award winners are talking nonsense when they point out the dangers of the technology they developed") require strong evidence.