Quick addition: it's also up on their YouTube channel with an (automatic) transcript here.
I don't think "won" or "lost" is a good frame for viewing these debates. Instead, given that AI Safety and existential/catastrophic concerns are now a mainstream issue, debates like these are part of a consensus-making process for both the field and the general public. I think the value of these debates lies in normalising this issue as one that is valid to have a debate about in the public sphere. These debates aren't confined to LessWrong or in-the-know Twitter sniping anymore, and I think that's unironically a good thing. (It's also harder to dunk on someone to their face than to their Twitter profile, which is an added bonus.)
For what it's worth, the changes were only a few percentage points, and the voting system was defective, as you mention. Both sides can claim victory (the pro side: "people still agree with us"; the anti side: "we changed more minds in the debate"). I think you're a bit too hasty to extrapolate what the "AI Safety Side" strategy should be. I also think the debate will live on in the public sphere after the end date, and I don't think the "no" side really came off as substantially better than the "yes" side, which should set off people's concerns along the lines of "wait, so if it is a threat, we don't have reliable methods of alignment??"
However, like everyone, I do have my own opinions and biases about what to change for future debates, so I suppose it'd be honest to own up to them:
I think Yudkowsky would be a terrible choice for this kind of format and forum. I suppose I could be wrong, but I really, really wouldn't want to risk it.
I think Tegmark probably comes across as the "weak link" in this debate, especially when compared to the 3 other AI experts, and from what I've seen of the debate he also comes across as less fluent/eloquent than the others.[1]
I personally think that Stuart Russell would be a great spokesman for the AI-risk-is-serious side: he has impeccable credentials, he has debated the issue before (see here vs Melanie), and I think his perspective on AI risk lends itself to a "slow-takeoff" framing rather than the "hard-takeoff" framing which Bengio/Hinton/Tegmark etc. seem to be pushing more.
I think the value of these debates lies in normalising this issue as one that is valid to have a debate about in the public sphere. These debates aren't confined to LessWrong or in-the-know Twitter sniping anymore, and I think that's unironically a good thing.
I agree.
I think you're a bit too hasty to extrapolate what the "AI Safety Side" strategy should be.
That may well be true.
I personally think that Stuart Russell would be a great spokesman for the AI-risk-is-serious side: he has impeccable credentials, he has debated the issue before (see here vs Melanie), and I think his perspective on AI risk lends itself to a "slow-takeoff" framing rather than the "hard-takeoff" framing which Bengio/Hinton/Tegmark etc. seem to be pushing more.
This isn't valuable to truth-seeking, but it does have an impact on perceptions of legitimacy, etc.
Yes, he definitely would have been a good choice.