Facilitating conversations between top people in AI alignment (I’ve in particular heard very good things about the 3-day conversation between Eric Drexler and Scott Garrabrant that Eli facilitated)
A small correction:
I do indeed facilitate conversations between high-level people in AI alignment. I have a standing offer to help with difficult conversations / intractable disagreements between people working on x-risk or other EA causes.
(I’m aiming to develop methods for resolving the most intractable disagreements in the space. The more direct experience I have trying my existing methods against hard, “real” conversations, the faster that development process can go. So, at least for the moment, it actively helps me when people request my facilitation. Also, a number of people, including Eric and Scott, have found it helpful for the immediate conversation.)
However, I co-facilitated that particular conversation between Eric and Scott. The other facilitators were Eliana Lorch, Anna Salamon, and Owen Cotton-Barratt.
Will update to say “help facilitate”. Thanks for the correction!
Is there any resource (e.g. a blog post) for people curious about what “facilitating conversations” involves?
At the moment, not really.
There’s the classic Double Crux post. Also, here’s a post I wrote that touches on one sub-skill (out of something like 50 to 70 sub-skills that I currently know). Maybe it helps give the flavor.
If I were to say what I’m trying to do in a sentence: “Help the participants actually understand each other.” Most people underestimate how hard this is, which is a large part of the problem.
The thing I’m aiming for in a conversation is the moment when that absurd / confused thing that the other person was saying clicks into place, and it doesn’t just seem reasonable; it seems like a natural way to think about the situation.
Another frame is, “Everything you need to do to make Double Crux actually work.”
A quick list of things conversational facilitation, as I do it, involves:
Tracking the state of mind of the participants. Tracking what’s at stake for each person.
Noticing when Double Illusion of Transparency, or talking past each other, is happening, and having the participants paraphrase or operationalize. Or, in the harder cases, getting each view myself and then acting as an intermediary.
Identifying Double Cruxes.
Helping the participants track what’s happening in the conversation and how the current thread connects to the higher-level goals. Cleaving to the query.
Keeping track of conversational threads, and of promising conversational tacks.
Drawing out and helping to clarify a person’s inarticulate objections, when they don’t buy an argument but can’t say why.
Ontological translation: getting each participant’s conceptual vocabulary to make natural sense to you, and then porting models and arguments back and forth between the differing conceptual vocabularies.
I don’t know if that helps. (I have some unpublished drafts on these topics. Eventually they’ll go up on LessWrong, but I’m likely to publish rough versions on my musings-and-rough-drafts blog first.)
Yes, that helps, thanks. “Mediating” might be a word that would convey the idea better.
[Are there ways to delete a comment? I started to write a comment here, and then added a bit to the top-level instead. Now I can’t make this comment go away?]