Thanks for the list; it’s the most helpful response for me so far. I’ll try responding to one thing at a time.
Structured debate mechanisms are not on this list, and I doubt they would make a huge difference because the debates are non-adversarial, but if one could be found it would be a good addition to the list, and therefore a source of lots of positive impact.
I think you’re saying that debates between EAs are usually non-adversarial. Due to good norms, they’re unusually productive, so you’re not sure structured debate would offer a large improvement.
I think one of EA’s goals is to persuade non-EAs of various ideas, e.g. that AI Safety is important. Would a structured debate method help with talking to non-EAs?
Non-EAs share fewer norms with EAs, so it’s harder to rely on norms to make debate productive. Saying “Please read our rationality literature and learn our norms; then it’ll be easier for us to persuade you about AI Safety” is a tough sell. Outsiders may be skeptical that EA norms and debates are as rational and non-adversarial as claimed, and may not want to learn a bunch of material before hearing the AI Safety arguments. But if you share the arguments first, they may respond in an adversarial or irrational way.
Compared to norms, written debate steps and rules are easier to share with others, simpler (and therefore faster to learn), easier to follow by good-faith actors (because they’re more specific and concrete than norms), and easier to point out deviations from.
In other words, I think replacing vague or unwritten norms with specific, concrete, explicit rules is especially helpful when talking with people who are significantly different from you. It has a larger impact on those discussions, and it helps deal with culture clash and with differences in background knowledge or context.