Not exactly what you’re describing, but MIRI and other safety researchers did take part in the MIRI conversations and have also debated at events. Those were helpful and I’d be excited about having more, but I think there are at least three obstacles to identifying cruxes:
1. Yudkowsky just has the pessimism dial set way higher than anyone else (it’s not clear that this is wrong, but it makes it hard to debate whether a plan will work).
2. Two research agendas are often built in different ontologies, which causes a lot of friction, especially when researcher A’s ontology is unnatural to researcher B. See the comments to this for a long discussion of what counts as inner vs. outer alignment.
3. Much of the disagreement comes down to research taste; see my comment here for an example of differences in opinion driven by taste.
That said, I’d be excited about debates between people with totally different views, e.g. Yudkowsky and Yann LeCun, if that could happen...