Effectively Handling Disagreements—Introducing a New Workshop

On May 25th, 2023, someone posted a review of How Minds Change on LessWrong. It covered Street Epistemology, Deep Canvassing, and Smart Politics, three ways of handling disagreements that open up the possibility of rational belief change through amicable discussion. Briefly summarized, they rely on active listening, sharing personal stories, and Socratic questioning.

You can now learn all three of these techniques online, for free, in 4 hours, in a Deliberate Practice setting. If interested, you can also learn them in an in-person workshop lasting anywhere from 2 hours to a full weekend; just send me an email with the subject line EHD (at the time of writing, I’m based in Paris, France).

You can enroll on the website (see the bottom of this post to subscribe to the mailing list) and join the Discord server.

About the workshop:

What would you learn?

When you find yourself in disagreement with someone on a significant issue, and they don’t share your perspective or even resist it, it’s natural to seek a productive dialogue. The goal is a conversation that brings both parties closer to understanding the truth. However, jumping directly into counter-arguments often proves counterproductive, leading to further resistance or increasingly convoluted counterpoints. It’s easy to label the other person as “irrational” in these moments.

To navigate these conversations more effectively, I’m offering a workshop that introduces a range of techniques based on evidence and mutual agreement. These methods are designed to facilitate discussions about deeply held beliefs in a friendly manner, keeping the focus on the pursuit of truth.

The techniques covered are the following:

4h version:
Street Epistemology, Deep Canvassing, and Smart Politics (the three techniques described above)

12h version:
All of the above, plus Principled Negotiation and elements of Motivational Interviewing

Who is this for?

I’m mainly targeting people who are not used to such interactions, or who feel frustrated by them; as such, you might not learn much if you are already used to managing high-stakes interactions.

In the specific case of Rationality/EA, this would allow you to:

  • Expand the community’s awareness by easing exchanges with outsiders
    e.g. you are a professional researcher in AI Safety who wants to engage with other researchers who are skeptical of your field.

  • Carefully spread awareness about Rat/EA-related ideas and cause areas
    e.g. you are talking about EA and someone starts being confrontational.

  • Improve the accuracy of the public perception of LW- / EA-related themes
    e.g. you meet someone at your local university or in a Twitter thread who has beliefs about these themes that you disagree with.

  • Help people inside and outside of the community to align their beliefs with truth
    e.g. if you’re leading a discussion about veganism during a fellowship.

Please note, however, that this workshop is neither designed exclusively for, nor offered exclusively to, the aforementioned communities.

Why?

It’s important, as individuals and as a community, that we’re able to communicate effectively with people who disagree with us. I’d like to offer an opportunity for people to practice some skills together, such as managing an angry interlocutor, establishing contact with someone who might see us as opponents, and discussing both respectfully and rigorously with people whose beliefs seem very far from ours.

Why a workshop?

All the techniques can be learned online. However, a workshop is often an important factor in kickstarting curiosity about them, as well as a good opportunity to practice in a safe environment. I also wanted to create a way to learn these techniques effectively through deliberate practice, something I hadn’t come across so far, though it isn’t fully automated for the time being.

If you would like to learn the techniques outside of the workshop, you can check the resources at the bottom of this post.

Very long FAQ:

Who am I?

I’m Camille (pronounced [ka’mij]), and I hold a Master’s in Cognitive Science (Linguistics and Psychology of Reasoning). I’ve been interested in resolving conflictual discussions since 2018, have been involved in EA since 2020, and have mainly done community building in Paris since then. I have also studied acting.
I’ve been building this workshop since, well, May 2023, and it has gone through several phases. The beta is now officially over, although I plan to keep improving the workshop based on future iterations.

Can you attend if you don’t consider yourself an “EA” or “rationalist”?

Yes! This set of techniques is designed for pretty much anyone. My main target is not disagreement between EAs or between rationalists, and I hope to make this workshop available to a larger set of people outside these communities.

Is this manipulative?

All the techniques suggested are both evidence- and consent-based, and focused on agreeableness, respect and truth-seeking. I also give ethical guidelines for each workshop.

Who is funding this?

I have asked for funding but have not received any positive answer yet. So far, this is volunteer work.

How intense is this?

Medium to high intensity, depending on how easily you absorb and apply information. I would suggest, for example, not doing high-intensity intellectual work or out-of-comfort-zone activities just before the workshop.

If you already took part in one format (short in-person, long in-person, or online), can you attend a workshop in another format?

Yes, but note that you might learn less than you’d expect otherwise.

What are people’s takes on this?

Participants usually find it helpful and sometimes ask for longer versions. People who are somewhat hostile to rationality are usually skeptical of using it. “Noble selves”, that is, people who proactively refuse to say anything other than what is on their mind, are usually hostile to it as well. I did meet one person, highly ranked on Metaculus, who had doubts about its usefulness yet thought the workshop was “cool” nonetheless. Participants who are already familiar with high-stakes interactions usually do not learn much. Finally, someone working in a governance-related area told me they thought it was better to be confrontational.

Meta

Is the workshop itself evidence-based?

Not for the time being, although the techniques taught have varying levels of evidence behind them, as does the pedagogy itself. I plan to test the workshop’s near-term efficacy in a rigorous experiment. If the result is negative, I’ll immediately kill the project.

I’m doing this alone; is this unilateral?

Nope! I haven’t decided to launch this in my little corner without any regard for what the rest of the community thinks about it.

I have received several green lights for a related idea, which I thought was riskier than the current version of the workshop. Several people convinced me to de-risk it, which led to the current version. I take it that the current version would receive the same green lights.

More importantly, this is not primarily about outreach, but about managing disagreement with someone who is already discussing a certain topic. I leave the responsibility for outreach per se to the participants of the workshop, and would encourage them to coordinate wisely before engaging in anything.

How do I handle [tricky epistemic topic]?

For the time being, the goal of the workshop is fairly limited: helping you and your interlocutor converge toward true beliefs in non-technical situations, usually at the interface between EA/Rationalism and the rest of the world. That is, you might be talking to an angry Twitter user; we’re not talking about discussing crucial plans or managing tricky memeplexes or cultures. I might tackle this aspect later but cannot commit to it for the time being.

Am I a better rationalist or a better instructor?

I would consider myself a better instructor than rationalist, and would provocatively say that, for this workshop in particular, it doesn’t matter much. The workshop doesn’t interact with rationalist memes; I do reference Bayes’ rule at some point but do not teach it.

Do I make sure this isn’t harmful?

Yes! The ethics are, for the time being, inspired by the ethics of the aforementioned resources, and a bit more restrictive, especially concerning consent, social status, and minorities. A full 30 minutes of the workshop is dedicated to ethics. I also plan to screen for potentially problematic participants and turn them away, and I hope to favor some specific profiles over others.

In the future, I plan to have clearer ethics of interaction, notably by drawing on the work currently being pursued on boundaries by Chipmonk.
I am not yet sure which topics will ultimately be recommended not to be discussed through this method, but I expect there to be a few.

I am also open to critiques and will try to update the workshop accordingly, at least as long as the most “expensive” critiques are rooted in evidence or verified in subsequent experiments. I will also treat critiques as costs that can be outweighed by benefits, and I expect a base rate of big failures even under strict guidelines.

I am also open to shutting down the workshop if it ends up being harmful.

What do I plan for the future?

  • Fully autonomous online workshop and training gear, including a fully automated interlocutor with feedback on performance.

  • A discussion moderator bot.

  • A new method that synthesizes the best advice for handling difficult conversations.

  • Relying exclusively on field-extracted data rather than roleplay.

  • Custom interventions especially designed for specific actors / environments.

  • Potentially (depending on several green lights), a selective workshop on approaching people we disagree with when it is not obvious how to approach them.

  • Collaboration with organizations, to either do interventions or to formalize a preexisting method.

  • A sequence on the cogs of conversation.

People I’d love to connect with:

1-Programmers who would feel comfortable committing to the following, or mentoring me to do it:

  • Design a webpage

  • Design online apps

    • That take user input

    • That allow for manual feedback by an instructor

    • That rely on custom versions of ChatGPT

2-Mentors or relevant senior people willing to join an advisory board

3-Anyone who has a background in experiment design, linguistics, epistemology, psychology of reasoning/argumentation, conversation analysis, argumentation theory, ethics of argumentation or interaction, social psychology / microsociology of argumentation / peace / conflict, eristology, Socratic pedagogy, interpersonal communication, evidence-based education science, active inference, or complex systems in general applied to small-scale human-human interactions, AND who is interested in this project.

4-Rationalists skilled in epistemics who’d check that there are no negative second-order effects or unhealthy/misaligned memes. The aim is obviously not to solve memeplex alignment, but to prevent damage as much as possible: monitoring whether things are going well, checking which trade-offs are worth it, and learning from the past mistakes of the other communities related to each of these techniques. Note that I don’t have a monopoly over the related memeplex, given that the aforementioned techniques have their own communities.

5-People with good socio-emotional skills or experience in high-stakes interactions, for their perspective and input. Ultimately, I hope the workshop will be redesigned by people who, above all, have a strong track record in the kinds of interactions the workshop promotes (mine is good, but not as strong as I’d wish).

Sources? Book recommendations?
For maximal legibility:

  • Deep Canvassing: Broockman, D. & Kalla, J. (2016). Durably reducing transphobia: A field experiment on door-to-door canvassing. Science, 352(6282), 220-224.

  • Street Epistemology: Boghossian, P. (2004). Socratic Pedagogy, Critical Thinking, Moral Reasoning and Inmate Education: An Exploratory Study (PhD thesis).

  • Toulmin Model: van Eemeren, F. H. et al. (2009). Fundamentals of Argumentation Theory. Routledge.

  • Bayesian Socratic Tests are inspired by: Koralus, P. & Mascarenhas, S. (2014). The Erotetic Theory of Reasoning.

  • Deliberate Practice: Ericsson, K. A. (2008). Deliberate Practice and Acquisition of Expert Performance: A General Overview.

The complete list is available at this link.

An urgent take on this?

Please consider putting it in the comments below. I’ll make sure to write a post summarizing people’s takes, especially those concerning community health and possible downsides. This is still new, so I’d actually be grateful for people to share their concerns or add nuances. As you can see, I’m being careful, and I plan to stay careful as the workshop scales up.

Many thanks to all the beta-testers who reviewed and gave feedback on the first versions.

Crossposted from LessWrong (37 points, 2 comments)