The Effective Altruism website defines EA as: “We use evidence and careful analysis to find the very best causes to work on.” The Introduction to Effective Altruism post on our forum also says: “It is a research field which uses high-quality evidence and careful reasoning to work out how to help others as much as possible.”
So I guess this is more or less the accepted definition of EA. But as I read more about EA, I am beginning to feel that this definition may be insufficient. It looks like the EA focus splits across two schools of thought: evidence-based giving and hits-based giving. This definition, though, seems to be all about evidence-based giving. It feels like the ‘GiveWell-ness’ of it all is represented, but what about the ‘OpenPhil-ness’?
This exclusion of hits-based giving from the definition seems problematic, since 80000hours.org (one of the top five ways people actually find EA) considers expected value thinking (the foundation of hits-based giving, if I understand it correctly) one of the key ideas of EA. But then you look at the definition and it is not really there. In addition, the incompleteness of the definition could make it difficult for someone to see why EA works on global catastrophic risks (GCRs), in my opinion. Please correct me if I am wrong, but it feels like GCR work doesn’t necessarily have high-quality evidence behind it; expected value thinking is what really makes it worth doing.
UPDATE:
I had only mentioned two sources of definitions above, but there could be more that I missed. If you know of others, please mention them in the comments/answers and I will add them to this list:
Defining Effective Altruism by William_MacAskill. Thanks to Davidmanheim for bringing this up in the answer here. The definition given in Will’s post is:
Effective altruism is: (i) the use of evidence and careful reasoning to work out how to maximize the good with a given unit of resources, tentatively understanding ‘the good’ in impartial welfarist terms, and (ii) the use of the findings from (i) to try to improve the world.
One, I’d argue that hits-based giving is a natural consequence of working through what using “high-quality evidence and careful reasoning to work out how to help others as much as possible” really means, since that statement doesn’t say anything about excluding high-variance strategies. For example, many would say there’s high-quality evidence about AI risk, lots of careful reasoning has been done to assess its impact on the long-term future, and many have concluded that working on such things is likely to help others as much as possible, though we may not be able to measure that help for a long time and we may make mistakes.
Two, it’s likely a strategic choice not to be in-your-face about high-variance giving strategies, since they seem pretty weird to most people. EA orgs have chosen to develop a public brand that is broadly appealing and not controversial on the surface (even if EA ends up courting controversy anyway because of its consequences for opportunities we judge to be relatively less effective than others). The definitions of EA you point to seem in line with this.
Googling, I primarily find the term “high-quality evidence” in association with randomised controlled trials (RCTs). I think many would say there isn’t any high-quality evidence regarding, e.g., AI risk.
Agreed—see my answer which notes that Will suggested a phrasing that omits “high-quality.”
The point about “working through what it really means” is very interesting (more on this below). But when I read “high-quality evidence and careful reasoning”, it doesn’t really engage the curious part of my brain to work out what that means. Those are all words I have heard before, and it feels like standard phrasing. When one isn’t encouraged to actually work through the definition, it does feel like it excludes high-variance strategies. I am not sure if you feel this way, but “high-quality evidence” to my brain just says empirical evidence. Maybe that is why I am sensing this exclusion of high-variance strategies.
You are probably right. But I worry about whether that is really a good strategy. By not openly saying that we do things we are uncertain about, we could end up coming off as a know-it-all who has it all figured out with evidence! There were some discussions along these lines in another recent post. Maybe a definition that gives a subtle nod to hits-based giving could help with that?
Your point about ‘working through the definition’ actually gave me an idea: what if we rephrased to “high-quality evidence and/or careful reasoning”? That non-standard ‘and/or’ sows some curiosity to actually work things out, doesn’t it? I am assuming that “high-quality evidence” means empirical evidence (as I already said) and that “careful reasoning” includes expected value thinking, Fermi estimates, and all the other reasoning tools that EAs use. Also, this small phrasing change is not radically different from what we already have, so the cost of changing shouldn’t be that high. Of course, the question is whether it is actually that much more effective than what we have. I would love to hear thoughts on that, and of course other suggestions for a better definition...
First, I don’t think that’s the best “current” definition. More recently (two years ago), Will proposed the definition now quoted in the question’s update.
But Will said he’s “making CEA’s definition a little more rigorous,” rather than replacing it. I think the key reason to allow hits-based giving in both cases is the word “and” in the phrase ”...evidence and careful reasoning.” (Note that Will omits “high-quality” before “evidence”, I’d suspect for the reason you suggested. I would argue that for a Bayesian, high-quality evidence doesn’t require an RCT, but that’s not the colloquial usage, so I agree Will’s phrasing is less likely to mislead.)
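To spell out that Bayesian point, here is a minimal sketch; this is just the standard odds form of Bayes’ theorem, not anything specific to Will’s post:

\[
\underbrace{\frac{P(H \mid E)}{P(\neg H \mid E)}}_{\text{posterior odds}} = \underbrace{\frac{P(E \mid H)}{P(E \mid \neg H)}}_{\text{likelihood ratio}} \times \underbrace{\frac{P(H)}{P(\neg H)}}_{\text{prior odds}}
\]

On this view, the strength of a piece of evidence E is its likelihood ratio, not the study design that produced it; an observation that is much more probable under H than under ¬H is strong evidence whether it came from an RCT or from a careful theoretical argument.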
And to be fair to the original definition, careful reasoning is exactly the justification for expected value thinking. Specifically, careful reasoning leads to favoring 20 “hits-based” donations to causes with a high risk of failure, where we expect 10% of them to end up with a cost per QALY (quality-adjusted life year) of $5 and the rest to end up useless, over a single donation 20x as large to an organization we are nearly certain has a cost per QALY of $200.
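To make the arithmetic explicit (the $1,000 grant size is my own hypothetical number, chosen only to run the calculation):

\[
\text{Hits-based: } 20 \times 10\% = 2 \text{ successful grants} \Rightarrow \frac{2 \times \$1{,}000}{\$5 \text{ per QALY}} = 400 \text{ expected QALYs}
\]
\[
\text{Single safe grant: } \frac{20 \times \$1{,}000}{\$200 \text{ per QALY}} = 100 \text{ QALYs}
\]

Both options spend the same $20,000, but the hits-based portfolio buys four times the expected QALYs (an effective cost of $50 per QALY), even though 18 of the 20 grants are expected to accomplish nothing.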
Thanks for bringing up Will’s post! I have now updated the question’s description to link to that.
I actually like Will’s definition more, for two reasons:
Will’s definition adds a bit more mystery, which makes me curious to actually work out what all the words mean. In fact, I would add this to the list of “principal desiderata for the definition” that the post mentions: the definition should encourage people to think about EA a bit deeply. It should be a good starting point for research.
Will’s definition is not radically different from what is already there (the post says “little more rigorous”), which makes the cost of changing to this definition lower. (One of the costs of changing something as fundamental as the definition could be giving the community the perception that there has been a significant change in the foundations of EA when there hasn’t been any; we are just trying to better reflect what is actually done in EA.)
One critique I have of Will’s alternative is that the proposed definition doesn’t quite distinguish the two schools of thought. To explain my thinking, here is a more visual representation. Let ( ) represent a bucket:
Will’s definition and the existing definition make it feel like (Evidence, Careful reasoning): just one bucket.
But it should really feel like (Evidence), (Careful reasoning): two separate buckets.
Apologies if that is too nitpicky but I don’t think it is. I think the distinctness of Evidence and Careful reasoning needs to come out.
I guess rephrasing it this way would be better: “Effective altruism attempts to improve the world by the use of experimental evidence and/or theoretical reasoning to work out how to maximize the good with a given unit of resources, tentatively understanding ‘the good’ in impartial welfarist terms.”

This rephrasing is inspired by the fact that many of the natural sciences split in two: theory and experiment (like theoretical physics and experimental physics). We are saying EA is also that way, which I think it is. I think this also adds to the science-aligned point that Will mentions. (I have edited this to say that I don’t think this definition is a good one. See my next comment below.)

I actually disagree with your definition. Will’s definition allows for debate about what counts as evidence and careful reasoning, and whether hits-based giving or focusing on RCTs is a better path. That ambiguity seems critical for capturing what EA is: a project still somewhat in flux, one that allows for refinement, rather than claiming there are two specific, different things.
A concrete example* of why we should be OK with leaving things ambiguous is considering ideas like the mathematical universe hypothesis (MUH). Someone can ask: “Should the MUH be considered as a potential path towards non-causal trade with other universes?” Is that question part of EA? I think there’s a case to make that the answer is yes (in my view, correctly), because it is relevant to revisiting the “tentatively understanding” part of Will’s definition.
*In the strangest sense of “concrete” I think I’ve ever used.
I both agree and disagree with you.
Agreements:
I agree that the ambiguity about whether hits-based or evidence-based giving is better is an important aspect of current EA understanding. In fact, I think this could be a potential fourth point (I mentioned a third one earlier) to add to the definition desiderata: the definition should hint at the uncertainty in current EA understanding.
I also agree that my definition doesn’t bring out this ambiguity. I am afraid it might even be doing the opposite! The general consensus is that both the experimental and theoretical parts of the natural sciences are equally important and must be done. But I guess EAs are actually unsure whether evidence-based giving and careful-reasoning-based (hits-based) giving should both be done, or whether we would do more good by focusing on just one. I should probably read up more on this. (I would appreciate it if any of you could DM me resources you have found on this.) I just assumed EAs believed both must be done. My bad!
Disagreement: I don’t see how Will’s definition allows for debating said ambiguity, though. As I mentioned in my earlier comment, I don’t think the definition distinguishes between the two schools of thought enough. As a consequence, I also don’t think it shows the ambiguity between them. A conflict (a.k.a. ambiguity) requires at least two things, but the definition doesn’t convincingly show there are two things in the first place, in my opinion.
I think this excerpt from “Ben Todd on the core of effective altruism” (the 80,000 Hours podcast) sort of answers your question:
I don’t think this is an “official definition” (for example, one endorsed by CEA), but I think (or at least hope!) that CEA is working out a more complete definition of EA.
Thanks for linking to the podcast! I hadn’t listened to this one before and ended up listening to the whole thing and learnt quite a bit.
I just wonder if Ben actually had some means in mind other than evidence and reasoning, though. Do we happen to know what he might be referencing here? I recognize it could just be him being humble and feeling that future generations could come up with something better (like awesome crystal balls :-p). But in case something other than evidence and reason is actually already out there, I find it really important to know.
Yeah, I agree. I don’t have anything in mind as such. I think only Ben can answer this :P