A similar version has been done before, and this might risk duplicating it. I don’t think that’s the case, though, because the earlier debate was hard to follow and wasn’t explicitly written with the intent of finding a joint belief.
That seems like a terrible attempt at adversarial collaboration, with a bunch of name calling and not much constructive engagement (and thus mostly interesting as a sociological exercise in understanding top AI researcher opinions). I am extremely not concerned about duplicating it!
To me the main issue with this plan will be finding an AI x-risk skeptic who actually cares enough to seriously engage and do this, and is competent enough to represent the opposing position well—my prediction is that the vast majority wouldn’t care enough to, and haven’t engaged that much with the arguments?
Boaz Barak seems like a good candidate? Or even the tweet I linked to, by Richard Ngo and Jacob Buckman.