List of reasons I think EA takes better actions than most movements, in no particular order:
taking weird ideas seriously; being willing to think carefully about them and dedicate careers to them
being unusually goal-directed
being unusually truth-seeking
this makes debates non-adversarial, which is easy mode
openness to criticism, plus a decent method of filtering it
high average intelligence. Doesn’t imply rationality but doesn’t hurt.
numeracy and scope-sensitivity
willingness to use math in decisions when appropriate (e.g. EV calculations; see the toy sketch below) is only part of this
less human misalignment: EAs have similar goals and so EA doesn’t waste tons of energy on corruption, preventing corruption, negotiation, etc.
relative lack of bureaucracy
various epistemic technologies taken from other communities: double-crux, forecasting
ideas from EA and its predecessors: crucial considerations, the ITN framework, etc.
taste: EAs are somehow able to (hopefully correctly) allocate more resources to AI alignment than to overpopulation or energy decline, for reasons not explained by the above.
Structured debate mechanisms are not on this list, and I doubt they would make a huge difference, since the debates are already non-adversarial; but if one could be found, it would be a good addition to the list, and therefore a source of lots of positive impact.
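To make the "EV calculations" point above concrete, here is a minimal toy sketch of that kind of comparison; the interventions, probabilities, and payoffs are entirely hypothetical and chosen only to illustrate scope-sensitivity.

```python
# Toy expected-value (EV) comparison between two hypothetical interventions.
# All numbers are made up purely for illustration.

def expected_value(p_success: float, value_if_success: float, cost: float) -> float:
    """Expected net value of funding an intervention once."""
    return p_success * value_if_success - cost

# Hypothetical intervention A: high probability of success, modest payoff.
ev_a = expected_value(p_success=0.9, value_if_success=1_000, cost=500)

# Hypothetical intervention B: low probability of success, very large payoff.
ev_b = expected_value(p_success=0.01, value_if_success=1_000_000, cost=500)

print(f"EV(A) = {ev_a:,.0f}")  # 400
print(f"EV(B) = {ev_b:,.0f}")  # 9,500

# Scope-sensitivity in action: B's payoff is 1000x larger, which dominates
# its much lower probability of success in the expected-value comparison.
```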
Thanks for the list; it’s the most helpful response for me so far. I’ll try responding to one thing at a time.
Structured debate mechanisms are not on this list, and I doubt they would make a huge difference, since the debates are already non-adversarial; but if one could be found, it would be a good addition to the list, and therefore a source of lots of positive impact.
I think you’re saying that debates between EAs are usually non-adversarial. Due to good norms, they’re unusually productive, so you’re not sure structured debate would offer a large improvement.
I think one of EA’s goals is to persuade non-EAs of various ideas, e.g. that AI Safety is important. Would a structured debate method help with talking to non-EAs?
Non-EAs have fewer shared norms with EAs, so it's harder to rely on norms to make debate productive. Saying "Please read our rationality literature and learn our norms so that it'll be easier for us to persuade you about AI Safety" is a tough ask. Outsiders may be skeptical that EA norms and debates are as rational and non-adversarial as claimed, and may not want to learn a bunch of stuff before hearing the AI Safety arguments. But if you share the arguments first, they may respond in an adversarial or irrational way.
Compared to norms, written debate steps and rules are easier to share with others, simpler (and therefore faster to learn), easier for good-faith actors to follow (because they're more specific and concrete than norms), and easier to point out deviations from.
In other words, I think replacing vague or unwritten norms with more specific, concrete, explicit rules is especially helpful when talking with people who are significantly different from you. It has a larger impact on those discussions. It helps deal with culture clash and differences in background knowledge or context.
taste: EAs are somehow able to (hopefully correctly) allocate more resources to AI alignment than to overpopulation or energy decline, for reasons not explained by the above.
Of course, in the eyes of the people warning about energy depletion, expecting energy growth to continue over decades is not the rational position ^^
I mean, 85% of energy comes from a finite stock, and renewables currently depend on that same stock to be built and maintained, so from the outside this seems at least worth exploring seriously. But I feel like very few people in EA have really considered the issue (as said here).
Which is normal: very few prominent figures are warning about it, and the best arguments are rarely put forward. There are a few people talking about this in France, but without them I think I'd have ignored this topic, like everybody else.
So I'd argue that exposure to a problem matters greatly as well.
I critiqued the list of points in https://forum.effectivealtruism.org/posts/7urvvbJgPyrJoGXq4/fallibilism-bias-and-the-rule-of-law