I feel like people might be interested in my opinions on the debate, as an AI doom skeptic.
I think that this live debate format is not particularly useful for truth-finding, and in this case it mostly produced shallow, 101-level arguments (on all sides). Nonetheless, it was a decent discussion, and I think everyone was polite and arguing in good faith. I definitely disagree with the OP that Mitchell was being “dismissive” for stating her honest belief that near-term AGI is unlikely. This is a completely valid position held by a significant portion of AI researchers.
I thought Bengio was the best debater of the bunch, as he was calm and focused on the most convincing element of the AI risk argument (that of malicious actors misusing AI). I respect that he emphasised his own uncertainty a lot.
LeCun did a lot better than I was expecting; I think he laid out his case pretty well and made a lot of good arguments. I think he might have misled the audience a little by not stating that his plan for controllable AI is speculative in nature.
I found Tegmark to be quite bad in this debate. He over-relied on unsupported speculation and appeals to authority, and didn’t really respond to his opponents in his rebuttals. I found his repeated requests for probability estimates to be a poor move: this may be the norm in EA spaces, but it looks very annoying to outsiders and is barely an argument. I think he is unused to communicating with the general public; I spotted a few occasions where he used the terms “alignment” and “safety” in ways that would not have been obvious to an onlooker.
Mitchell was generally fine and did bring up some good arguments (like that an intelligent AI would be able to figure out what we wanted), but it felt like she was a little unprepared and wasn’t good at responding to the others’ arguments. I think she would have done well to research the counterarguments to her points in order to address them better.
Overall, I thought it was generally fine as an introduction for the layman.
I definitely disagree with the OP that Mitchell was being “dismissive” for stating her honest belief that near-term AGI is unlikely. This is a completely valid position held by a significant portion of AI researchers.
I didn’t state that. I think Mitchell was “dismissive” (even aggressively so) by calling the view of Tegmark, Bengio, and indirectly Hinton and others “ungrounded speculations”. I have no problem with someone stating that AGI is unlikely within a specific timeframe, even if I think that’s wrong.
I agree with most of what you wrote about the debate, although I don’t think that Mitchell presented any “good” arguments.
intelligent AI would be able to figure out what we wanted
It probably would, but that’s not the point of the alignment problem. The problem is that even if it knows what we “really” want, it won’t care about it unless we find a way to align it with our values, needs, and wishes, which is a very hard problem (if you doubt that, I recommend watching this introduction). We understand pretty well what chickens, pigs, and cows want, but we still treat them very badly.
I think calling their opinions “ungrounded speculation” is an entirely valid opinion, although I would personally use the more diplomatic term “insufficiently grounded speculation”. She acknowledges that they have reasons for their speculation, but does not find those reasons to be sufficiently grounded in evidence.
I do not like that her stating her opinions and arguments politely and in good faith is being described as “aggressive”. I think this kind of hostile attitude towards skeptics could be detrimental to the intellectual health of the movement.
As for your alignment thoughts, I have heard the arguments and disagree with them, but I’ll just link to my post on the subject rather than drag it in here.
I think calling their opinions “ungrounded speculation” is an entirely valid opinion, although I would personally use the more diplomatic term “insufficiently grounded speculation”.
I disagree on that. Whether politely said or not, it disqualifies another’s views without any arguments at all. It’s like saying “you’re talking bullshit”. Now, if you do that and then follow up with “because, as I can demonstrate, facts A and B clearly contradict your claim”, then that may be okay. But she didn’t do that.
She could have said things like “I don’t understand your argument”, “I don’t see evidence for claim X”, or “I don’t believe Y is possible, because …”. Even better would have been to ask, “Can you explain to me why you think an AI could become uncontrollable within the next 20 years?”, and then respond to the arguments.
I think we’ll just have to disagree on this point. I think it’s perfectly fine to (politely) call bullshit, if you think something is bullshit, as long as you follow it up with arguments as to why you think that (which she did, even if you think the arguments were weak). I think EA could benefit from more of a willingness to call out emperors with no clothes.
I think it’s perfectly fine to (politely) call bullshit, if you think something is bullshit, as long as you follow it up with arguments as to why you think that
Agreed.
(which she did, even if you think the arguments were weak)
That’s where we disagree: strong claims (“Two Turing Award winners talk nonsense when they point out the dangerousness of the technology they developed”) require strong evidence.