Hey, thanks for engaging. I am also concerned about the future, which is why I think it’s incredibly important to get the facts right, and ensure that we aren’t being blinded by groupthink or poor judgment.
I don’t think your “heuristic” vs “argument” distinction is sufficiently coherent to be useful. I prefer to think of it all as evidence, and talk about the strength of that evidence.
That weapons have gotten more deadly over time is evidence in favour of AI danger; it’s just weak evidence. That the AI field has previously fallen victim to overblown hype is evidence against imminent AI danger; it’s just weak evidence. We’re talking about a speculative event, so extrapolation from the past/present is inevitable. What matters is how useful/strong such extrapolations are.
You talk about Tegmark citing recent advances in AI as “concrete evidence” that a future AGI will be capable of world domination. But obviously you can’t predict long-term outcomes from short-term trends. LeCun/Mitchell retort that AIs still perform incredibly poorly at many seemingly easy tasks, and that AI hype has occurred before, so they extrapolate the other way and say progress will stall out at some point.
Who is right? You can’t figure that out by the semantics of “heuristics”. To get an actual answer, you have to dig into actual research on capabilities and limitations, which was not done by anybody in this debate (mainly because it would have been too technical for a public-facing debate).
I don’t think your “heuristic” vs “argument” distinction is sufficiently coherent to be useful. I prefer to think of it all as evidence, and talk about the strength of that evidence.
I agree in principle, though I still think there’s a difference between a heuristic and a logical conclusion. That said, not all heuristics are bad arguments. If I get an email from someone who wants to donate $10,000,000 to me, I apply the heuristic that this is likely a scam, without looking for further evidence. So yes, heuristics can be very helpful. They’re just not very reliable in highly unusual situations. In German comments, I often read: “Sam Altman wants to hype OpenAI by presenting it as potentially dangerous, so this open letter he signed must be hype.” That’s an example of how a heuristic can be misleading. It ignores the fact, for example, that Yoshua Bengio and Geoffrey Hinton also signed that letter.
You talk about Tegmark citing recent advances in AI as “concrete evidence” that a future AGI will be capable of world domination.
No. Tegmark cites this as concrete evidence that a future uncontrollable AGI is possible and that we shouldn’t carelessly dismiss this threat. He readily admits that there may be unforeseen obstacles, and so do I.
Who is right? You can’t figure that out by the semantics of “heuristics”. To get an actual answer, you have to dig into actual research on capabilities and limitations, which was not done by anybody in this debate (mainly because it would have been too technical for a public-facing debate).
I fully agree.