I asked if EA has a rational debate methodology in writing that people sometimes use. The answer seems to be “no”.
I asked if EA has any alternative to rationally resolve disagreements. The answer seems to be “no”.
If the correct answer to either question is actually “yes”, please let me know by responding to that question.
My questions were intended to form a complete pair: do you use X for rationality, and if not, do you use anything other than X?
Does EA have some other way of being rational which wasn’t covered by either question? Or is something else going on?
My understanding is that rationality is crucial to EA’s mission (of basically applying rationality, math, evidence, etc., to charity – which sounds great to me) so I think the issue I’m raising is important and relevant.
I think people try pretty hard to come to accurate answers given the information available, and have inherited or come up with various tools for this (e.g. probabilistic forecasting). Whether that counts as “rationality” or not depends a lot on your definitions of what it means to be rational, and how low your bar is.
I don’t think we’re perfectly rational, and there’s an argument that we aren’t investing as many resources as would be optimal in rationality- or epistemics-enhancing interventions. But it’s pretty hard to answer a broad question like “How Is EA Rational?”, and I don’t think the crux is a specific form of argument mapping that we use or don’t use.
At face value, the answer is something like we’re reasonably good at coming to accurate enough answers to hard-ish questions. Whether this is “good enough” depends on whether “accurate enough” is good enough, how hard the questions we ultimately want to solve are, and whether/how much we can do better given the resources available.
But I don’t think this is exactly what you’re asking. In sum, I don’t think “is X rational” has a binary answer.
Bias and irrationality are huge problems today. Should I make an effort to do better? Yes. Should I trust myself? No – at least as little as possible. It’s better to assume I will fail sometimes and design around that. E.g. what policies would limit the negative impact of the times I am biased? What constraints or rules can I impose on myself so that my irrationalities have less impact?
So when I see an answer like “I think people [at EA] try pretty hard [… to be rational]”, I find it unsatisfactory. Trying is good, but I think planning for failures of rationality is needed. Being above average at rationality, and trying more than most people, can actually, paradoxically, partly make things worse, because it can reduce how much people plan for rationality failures.
Following written debate methods is one way to reduce the impact of bias and irrationality. I might be very biased but not find any loophole in the debate rules that lets my bias win. Similarly, transparency policies help reduce the impact of bias – when I don’t have the option to hide what I’m doing, and I have to explain myself, then I won’t take some biased actions because I don’t see how to get away with them (or I may do them anyway, get caught, and be overruled so the problem is fixed).
We should develop as much rationality and integrity as we can. But I think we should also work to reduce the need for personal rationality and integrity by building some rationality and integrity into rules and policies. We should limit our reliance on personal rationality and integrity. Explicit rules and policies, and other constraints against arbitrary action, help with that.
I think this is possible, but it will mostly come from arrogance and from ignoring big rationality failures after getting small wins.
For example, you can wear your busier (and possibly more knowledgeable) interlocutors down with boredom.
I agree that relying entirely on personal rationality/integrity is not sufficient. To make up for individual failings, I feel more optimistic about cultural and maybe technological shifts than rules and policies. Top-down rules and policies especially feel a bit suss to me, given the lack of a track record.
List of reasons I think EA takes better actions than most movements, in no particular order:
taking weird ideas seriously; being willing to think carefully about them and dedicate careers to them
being unusually goal-directed
being unusually truth-seeking
this makes debates non-adversarial, which is easy mode
openness to criticism, plus a decent method of filtering it
high average intelligence. Doesn’t imply rationality but doesn’t hurt.
numeracy and scope-sensitivity
willingness to use math in decisions when appropriate (e.g. EV calculations) is only part of this
less human misalignment: EAs have similar goals and so EA doesn’t waste tons of energy on corruption, preventing corruption, negotiation, etc.
relative lack of bureaucracy
various epistemic technologies taken from other communities: double-crux, forecasting
ideas from EA and its predecessors: crucial considerations, the ITN framework, etc.
taste: EAs are somehow able to (hopefully correctly) allocate more resources to AI alignment than to overpopulation or energy decline, for reasons not explained by the above.
Structured debate mechanisms are not on this list, and I doubt they would make a huge difference, since the debates are already non-adversarial; but if a good one could be found, it would be a worthwhile addition to the list, and therefore a source of a lot of positive impact.
Thanks for the list; it’s the most helpful response for me so far. I’ll try responding to one thing at a time.
I think you’re saying that debates between EAs are usually non-adversarial. Due to good norms, they’re unusually productive, so you’re not sure structured debate would offer a large improvement.
I think one of EA’s goals is to persuade non-EAs of various ideas, e.g. that AI Safety is important. Would a structured debate method help with talking to non-EAs?
Non-EAs share fewer norms with EAs, so it’s harder to rely on norms to make debate productive. Saying “Please read our rationality literature and learn our norms so that it’ll then be easier for us to persuade you about AI Safety” is a tough ask. Outsiders may be skeptical that EA norms and debates are as rational and non-adversarial as claimed, and may not want to learn a bunch of material before hearing the AI Safety arguments. But if you share the arguments first, they may respond in an adversarial or irrational way.
Compared to norms, written debate steps and rules are easier to share with others, simpler (and therefore faster to learn), easier for good-faith actors to follow (because they’re more specific and concrete than norms), and easier to point out deviations from.
In other words, I think replacing vague or unwritten norms with more specific, concrete, explicit rules is especially helpful when talking with people who are significantly different from you. It has a larger impact on those discussions, and it helps deal with culture clash and differences in background knowledge or context.
Of course, in the eyes of the people warning about energy depletion, expecting energy growth to continue over decades is not the rational decision ^^
I mean, 85% of energy comes from a finite stock, and renewables currently depend on this stock to be built and maintained, so from the outside that seems at least worth exploring seriously. But I feel like very few people in EA have really considered the issue (as said here).
Which is normal: very few prominent figures are warning about it, and the best arguments are rarely put forward. There are a few people talking about this in France, but without them I think I’d have ignored this topic, like everybody else.
So I’d argue that exposure to a problem matters greatly as well.
I critiqued the list of points in https://forum.effectivealtruism.org/posts/7urvvbJgPyrJoGXq4/fallibilism-bias-and-the-rule-of-law
I think, based on the way you’re phrasing your question, you’re perhaps not fully grasping the key ideas of Less Wrong style rationality, which is what EA rationality is mostly about. It might help to read something like this post about what rationality is and isn’t as a starting point, and from there explore the Less Wrong sequences.
I’ve read the sequences in full.
No offense, but I’m surprised, because your phrasing doesn’t parse for me. It’s not clear to me what it would mean for EA as a movement to be “rational”, and most uses of “rational” in the way you’re using it here reflect a pattern shared among folks with only passing familiarity with Less Wrong.
For example, you ask about “rational debate” and “rationally resolv[ing] disagreements”, but the point of the post I linked is, in part, that this doesn’t make sense to ask for. People might debate using rational arguments, but it would be odd to call that a rational debate: the debate itself isn’t the thing that is or isn’t rational; what could be rational is the thought processes of the debaters.
Maybe this odd phrasing is why you got few responses, since it reads like a signal that you’ve failed to grasp a fundamental point of Less Wrong style rationality: that rationality is a method applied by agents, not an essential property something can have or not.
You raise multiple issues. Let’s go one at a time.
I didn’t write the words “rational dispute resolution”. I consider inaccurate quotes an important issue. This isn’t the first one I’ve seen, so I’m wondering if there’s a disagreement about norms.
I was just paraphrasing. You literally wrote “rationally resolve disagreements”, which feels like the same thing to me as “rational dispute resolution”.
I edited my comment to quote you more literally since I think it maintains exactly the same semantic content.
We disagree about quotation norms. I believe this is important and I would be interested in discussing it. Would you be? We could both explain our norms (including beliefs about their importance or lack thereof) and try to understand the other person’s perspective.
I don’t know if we really disagree, but I’m not interested in talking about it. Seems extremely unlikely to be a discussion worth the effort to have since I don’t think either of us thinks making up deceptive quotes is okay. I think I’m just sloppier than you and that’s not interesting.
TLDR: We don’t have an easy-to-summarise methodology, and being rational is pretty hard. Generally we try our best, hold ourselves and each other accountable, and try to set up the community in a way that encourages rationality. If what you’re looking for is a list of techniques to be more rational yourself, you could read this book of rationality advice or talk to people about why they prioritise what they do in a discussion group.
Some meta stuff on why I think you got unsatisfactory answers to the other questions
I wouldn’t try to answer either of the previous questions because the answers seem big and definitely incomplete. I don’t have a quick summary for how I would resolve a disagreement with another EA because there are a bunch of overlapping techniques that can’t be described in a quick answer.
To put it into perspective, I’d say the foundation of how I personally try to approach EA rationally is in the Rationality A-Z book, but that probably doesn’t cover everything in my head, and I definitely wouldn’t put it forward as a complete methodology for finding the truth. For a specific EA spin, just talking to people about why they prioritise what they prioritise is what I’ve found most helpful, and an easy way to do that is in EA discussion groups (in person is better than online).
It is pretty unfortunate that there isn’t some easy-to-summarise methodology or curriculum for applying rationality to charity. Current EA curricula are pretty focussed on just laying out our current best guesses and using those examples, along with discussion, to demonstrate our methodology.
How is EA rational then?
I think the main thing happening in EA is that there is a strong personal, social, and financial incentive for people to approach their work “rationally”. E.g. people in the community will expect you to have some reasoning which led you to do what you’re doing, and they’ll give feedback on that reasoning if they think it’s missing an important consideration. From that spawns a bunch of people thinking about how to reason about this stuff more rationally, and we end up with a big set of techniques and concepts which seem to guide us better.
Trying to address only one thing at a time:
I don’t think I asked for an “easy to summarise methodology” and I’m unclear on where that idea is coming from.
I was responding mainly to the format. I don’t expect you to get complete answers to your earlier two questions, because there’s a lot more rationality methodology in EA than can be expressed in the amount of time I expect someone to spend on an answer.
If I had to put my finger on why the failure to answer those questions isn’t as concerning to me as it seems to be for you, I’d say it’s because:
A) Just because it’s hard to answer doesn’t mean EAs aren’t holding themselves and each other to a high epistemic standard.
B) Something about not letting the perfect be the enemy of the good, and about the urgency of other work. I want humanity to have some good universal epistemic tools, but I don’t currently have them, and I don’t really have the option to wait to do good until I do. So I’ll just focus on the best thing my flawed brain sees to work on at the moment (using what fuzzy technical tools it has, but still being subject to bias), because I don’t have any other machinery to use.
I could be wrong, but my read from your comments on other answers is that we disagree most on B). E.g. you think current EA work would be better directed if we were able to have a lot more formally rational discussions, to the point that EA work or priorities should be put on hold (or slowed down) until we can do this.
I think I disagree with you on both A and B, as well as some other things. Would you like to have a serious, high-effort discussion about it and try to reach a conclusion?