Quick addition: it’s also up on their YouTube channel with an (automatic) transcript here.
I don’t think ‘won’ or ‘lost’ is a good frame for these debates. Instead, given that AI Safety and existential/catastrophic concerns are now a mainstream issue, debates like these are part of a consensus-making process for both the field and the general public. I think the value of these debates lies in normalising this issue as one that is valid to have a debate about in the public sphere. These debates aren’t confined to LessWrong or in-the-know Twitter sniping anymore, and I think that’s unironically a good thing. (It’s also harder to dunk on someone to their face than on their Twitter profile, which is an added bonus.)
For what it’s worth, the changes were only a few percentage points, and the voting system was defective, as you mention. Both sides can claim victory (the pro side: “people still agree with us”; the anti side: “we changed more minds in the debate”). I think you’re a bit too hasty to extrapolate what the ‘AI Safety side’ strategy should be. I also think that the debate will live on in the public sphere after the end date, and I don’t think that the ‘no’ side really came off as substantially better than the ‘yes’ side, which should set off people’s concerns along the lines of “wait, so if it is a threat, we don’t have reliable methods of alignment??”
However, like everyone, I do have my own opinions and biases about what to change for future debates, so I suppose it’d be honest to own up to them:
I think Yudkowsky would be a terrible choice for this kind of format and forum. Though I suppose I could be wrong, I really, really wouldn’t want to risk it.
I think Tegmark probably comes across as the ‘weak link’ in this debate, especially when compared to the 3 other AI experts, and from what I’ve seen of the debate he also comes across as less fluent/eloquent than the others.[1]
I personally think that Stuart Russell would be a great spokesman for the AI-risk-is-serious side: he has impeccable credentials, has debated the issue before (see here vs Melanie), and I think his perspective on AI risk lends itself to a “slow-takeoff” framing rather than the “hard-takeoff” framing which Bengio/Hinton/Tegmark etc. seem to be pushing more.
I think the value of these debates lies in normalising this issue as one that is valid to have a debate about in the public sphere. These debates aren’t confined to LessWrong or in-the-know Twitter sniping anymore, and I think that’s unironically a good thing.
I agree.
I think you’re a bit too hasty to extrapolate what the ‘AI Safety Side’ strategy should be.
That may well be true.
I personally think that Stuart Russell would be a great spokesman for the AI-risk-is-serious side: he has impeccable credentials, has debated the issue before (see here vs Melanie), and I think his perspective on AI risk lends itself to a “slow-takeoff” framing rather than the “hard-takeoff” framing which Bengio/Hinton/Tegmark etc. seem to be pushing more.
This isn’t valuable for truth-seeking, but it does have an impact on perceptions of legitimacy etc.
Yes, he definitely would have been a good choice.