Take the heuristic that Tegmark employed in this debate: that the damage potential of human weapons has increased over time. He talks about how we went from sticks to guns to bombs, from killing dozens to hundreds to millions.
This is undeniably a heuristic, but it’s used to prime people for his later logical arguments for why AI, like these earlier technologies, is also dangerous.
This is not a heuristic. It would be a heuristic if he had argued “Because weapons have increased in power over time, we can expect that AI will be even more dangerous in the future”. But that’s not what he did, if I remember correctly (unfortunately, I don’t have access to my notes on the debate right now; I may edit this comment later). However, he may have used this example as priming, which is not the same thing in my opinion.
Mitchell in particular seemed to argue that AI x-risk is unlikely and that talking about it is just “ungrounded speculation” because fears have been overblown in the past, which would count as a heuristic, but I don’t think LeCun used it in the same way. I admit, though, that telling the two apart isn’t easy.
The important point here is not so much whether using historical trends or other unrelated data in arguments is good or bad; it’s whether the argument is built mainly on these. As I see it:
Tegmark and Bengio argued that we need to take x-risk from AI seriously because we can’t rule it out. They gave concrete evidence for that, e.g. the rapid development of AI capabilities in recent years. Bengio mentioned how that had surprised him, so he had updated his probability (a Bayes’-rule sketch after this list spells out what such an update means). Both admitted that they didn’t know with certainty whether AI x-risk was real, but they gave it a high enough probability to be concerned. Tegmark explicitly asked for “humbleness”: because we don’t know, we need to be cautious.
LeCun mainly argued that we don’t need to be worried because nobody would be stupid enough to build a dangerous ASI without knowing how to control it. So in principle, he admitted that there would indeed be a risk if there were a chance that someone could be stupid enough to do just that. I think he was closer to Bengio’s and Tegmark’s viewpoints on this than to Mitchell’s.
Mitchell mainly argued that we shouldn’t take AI x-risk seriously because a) it is extremely unlikely that we’ll be able to build uncontrollable AI in the foreseeable future and b) talking about x-risk is dangerous because it takes energy away from “real” problems. Point a) was in direct contradiction to what LeCun said. The evidence she provided for it was mainly a heuristic (“people in the ’60s thought we were close to AGI, and it turned out they were wrong, so people are wrong now”) and an anthropocentric view (“computers aren’t even alive, they can’t make their own decisions”), which I would also count as a heuristic (“humans are the only intelligent species, computers will never be like humans, therefore computers are very unlikely to ever be more intelligent than humans”), though this may be a misrepresentation of her views. In my opinion, she gave no evidence at all to justify her claim that two of the world’s leading AI experts (Hinton and Bengio) were doing “ungrounded speculation”. Point b) is irrelevant to the question debated and also a very bad argument IMO.
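As an aside, “surprised, so he updated his probability” has a precise reading. Here is a minimal sketch via Bayes’ rule; the symbols are generic, no numbers from the debate are involved:

$$P(H \mid E) = \frac{P(E \mid H)}{P(E)}\,P(H)$$

If the evidence $E$ (the rapid capability gains) was surprising, then $P(E)$ was small; and if a world where $H$ holds (AI can become dangerously capable) makes $E$ much more expected, the ratio $P(E \mid H)/P(E)$ is well above 1 and the posterior $P(H \mid E)$ rises sharply. That is all “updating a probability” amounts to here.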
I admit that I’m biased and my analysis may be clouded by emotions. I’m concerned about the future of my three adult sons, and I think people arguing like LeCun and, even more so, Mitchell are carelessly endangering that future. That is true for your own future as well, of course.
Hey, thanks for engaging. I am also concerned about the future, which is why I think it’s incredibly important to get the facts right, and ensure that we aren’t being blinded by groupthink or poor judgment.
I don’t think your “heuristic” vs “argument” distinction is sufficiently coherent to be useful. I prefer to think of it all as evidence, and talk about the strength of that evidence.
That weapons have gotten more deadly over time is evidence in favour of AI danger; it’s just weak evidence. That the AI field has previously fallen victim to overblown hype is evidence against imminent AI danger; it’s also weak evidence. We’re talking about a speculative event, so extrapolation from the past/present is inevitable. What matters is how useful/strong such extrapolations are.
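To make “weak evidence” concrete, here’s a minimal sketch in odds form; the likelihood ratios are invented for illustration, not estimates I’d defend:

```python
def update_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

# Hypothetical prior odds on "AI becomes an existential danger".
odds = 0.1

# "Weapons got deadlier over time": slightly more expected in worlds
# where AI is dangerous, so a likelihood ratio only a bit above 1.
odds = update_odds(odds, 1.2)

# "Past AI hype fizzled": slightly more expected in worlds where the
# danger is overblown, so a likelihood ratio a bit below 1.
odds = update_odds(odds, 0.8)

print(f"posterior odds: {odds:.3f}")  # ~0.096 -- barely moved from 0.1
```

Weak evidence is just a likelihood ratio near 1: it nudges the odds without settling the question.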
You talk about Tegmark citing recent advances in AI as “concrete evidence” that a future AGI will be capable of world domination. But obviously you can’t predict long-term outcomes from short-term trends. LeCun/Mitchell retort that AI is still incredibly bad at many seemingly easy tasks, and that AI hype has occurred before, so they extrapolate the other way and say progress will stall out at some point.
Who is right? You can’t figure that out by the semantics of “heuristics”. To get an actual answer, you have to dig into actual research on capabilities and limitations, which was not done by anybody in this debate (mainly because it would have been too technical for a public-facing debate).
> I don’t think your “heuristic” vs “argument” distinction is sufficiently coherent to be useful. I prefer to think of it all as evidence, and talk about the strength of that evidence.
I agree in principle. However, I still think there’s a difference between a heuristic and a logical conclusion, and not all heuristics are bad arguments. If I get an email from someone who wants to donate $10,000,000 to me, I use the heuristic that this is likely a scam without looking for further evidence. So yes, heuristics can be very helpful; they’re just not very reliable in highly unusual situations. In German comment sections, I often read “Sam Altman wants to hype OpenAI by presenting it as potentially dangerous, so this open letter he signed must be hype”. That’s an example of how a heuristic can be misleading: it ignores the fact, for example, that Yoshua Bengio and Geoffrey Hinton also signed that letter.
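To spell out why the scam heuristic works and why the Altman one fails, here’s a toy Bayes calculation; all the numbers are invented for illustration:

```python
def posterior(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """P(H | E) via Bayes' rule for a binary hypothesis H."""
    joint_h = prior * p_e_given_h
    joint_not_h = (1 - prior) * p_e_given_not_h
    return joint_h / (joint_h + joint_not_h)

# Scam heuristic: genuine $10,000,000 donation emails are vanishingly
# rare, and scams look exactly like genuine offers, so the tiny prior
# dominates and the heuristic is safe.
print(posterior(prior=1e-7, p_e_given_h=1.0, p_e_given_not_h=1.0))  # ~1e-7

# Hype heuristic: "Bengio and Hinton also signed" is much more likely
# if the letter reflects genuine concern than if it is pure marketing,
# so ignoring that evidence is exactly where the heuristic misleads.
print(posterior(prior=0.5, p_e_given_h=0.9, p_e_given_not_h=0.1))  # 0.9
```

A heuristic is essentially a cached prior; it misleads exactly when the situation supplies diagnostic evidence and you refuse to update on it.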
> You talk about Tegmark citing recent advances in AI as “concrete evidence” that a future AGI will be capable of world domination.
No. Tegmark cites this as concrete evidence that a future uncontrollable AGI is possible and that we shouldn’t carelessly dismiss this threat. He readily admits that there may be unforeseen obstacles, and so do I.
> Who is right? You can’t figure that out by the semantics of “heuristics”. To get an actual answer, you have to dig into actual research on capabilities and limitations, which was not done by anybody in this debate (mainly because it would have been too technical for a public-facing debate).
Thank you for clarifying your view!
I fully agree.