“Big tent” effective altruism is very important (particularly right now)
[Note: “Big tent”[1] refers to a group that encourages “a broad spectrum of views among its members”. This is not a post arguing for “fast growth” of highly engaged EAs (HEAs), but rather a recommendation that, as we inevitably get more exposure, we try to represent and cultivate our diversity while ensuring we present EA as a question.]
This August, when Will MacAskill launches What We Owe the Future, we will see a spike of interest in longtermism and effective altruism more broadly. People will form their first impressions – and these will be hard to shake.
After hearing of these ideas for the first time, they will be wondering things like:
Who are these people? (Can I trust them? Are they like me? Do they have an ulterior agenda?)
What can I do (both literally right now, and in terms of how it might shape my decisions over time)?
What does this all mean for me and my life?
If we’re lucky, they’ll investigate these questions. The answers they get matter (and so does their experience finding those answers).
I get the sense that effective altruism is at a crossroads right now. We can either become a movement of people who appear dedicated to a particular set of conclusions about the world, or we can become a movement of people who appear united by a shared commitment to using reason and evidence to do the most good we can.
In the former case, I expect we’d become a much smaller group – one that’s easier to coordinate and focus, but also more easily dismissed. People might see us as a bunch of nerds[2] who have read too many philosophy papers[3] and who are out of touch with the real world.
In the latter case, I’d expect us to become a much bigger group. I’ll admit that it’s also a group that’s harder to organise (people are coming at the problem from different angles and with varying levels of knowledge). However, if we are to have the impact we want, I’d bet on the latter option.
I don’t believe we can – nor should – simply tinker on the margins forever nor try to act as a “shadowy cabal”. As we grow, we will start pushing for bigger and more significant changes, and people will notice. We’ve already seen this with the increased media coverage of things like political campaigns[4] and prominent people that are seen to be EA-adjacent[5].
A lot of these first impressions we won’t be able to control. But we can try to spread good memes about EA (inspiring and accurate ones), and we do have some level of control over what happens when people show up at our “shop fronts” (e.g. prominent organisations, local and university groups, conferences etc.).
I recently had a pretty disheartening exchange in which I heard from a new GWWC member who’d started to help run a local group and who felt “discouraged and embarrassed” at an EAGx conference. They left feeling like they weren’t earning enough to be “earning to give” and that they didn’t belong in the community if they weren’t doing direct work (or didn’t have an immediate plan to drop everything and change). They said this “poisoned” their interest in EA.
Experiences like this aren’t always easy to prevent, but it’s worth trying.
At Giving What We Can, we are aware that we are one of these “shop fronts”. So we’re currently thinking about how we represent worldview diversity within effective giving and what options we present to first-time donors. Some examples:
We’re focusing on providing easily legible options (e.g. larger organisations with an understandable mission and strong track record[6], instead of more speculative small grants that foundations are better placed to make) and easier decisions (e.g. “I want to help people now” or “I want to help future generations”).
We’re also cautious about how we talk about The Giving What We Can Pledge to ensure that it’s framed as an invitation for those who want it and not an admonition of those for whom it’s not the right fit.
We’re working to ensure that people who first come across EA via effective giving can find their way to the actions that best fit them (e.g. by introducing them to the broader EA community).
We often cross-promote careers as a way of doing good, but we’re careful to do so in a way that doesn’t diminish those who aren’t in a position to switch careers, and that leaves people feeling good about their donations.
These are just small ways to make effective altruism more accessible and appealing to a wider audience.
Even if we were just trying to reach a small number of highly-skilled individuals, we don’t want to make life difficult for them by having effective altruism (or longtermism) seem too weird to their family or friends (people are less likely to take actions when they don’t feel supported by their immediate community). Even better, we want people’s interest in these ideas and actions they take to spur more positive actions by those in their lives.
I believe we need the kind of effective altruism where:
A university student tells their parents they’re doing a fellowship on AI policy because their effective altruism group told them it’d be a good fit; their parents Google “effective altruism” and end up thrilled[7] (so much so that they end up donating).
A consultant tells their spouse they’re donating to safeguard the long-term future; their spouse looks into it and realises that their skills in marketing and communications would be needed for Charity Entrepreneurship and applies for a role.
A rabbi gets their congregation involved in the Jewish Giving Initiative; one of them goes on to take the Giving What We Can Pledge; they read the newsletter and start telling their friends who care about climate change about the Founders Pledge Climate Fund.
A workplace hosts a talk on effective giving, many people donate to a range of high-impact causes, and one of them goes “down the rabbit hole”, and their subsequent career shift is into direct work.
A journalist is covering a local election and sees that one of the candidates has an affiliation with effective altruism; they understand the merits and write favourably about the candidate while communicating carefully the importance of the positions they are putting forward.
Many paths to effective altruism. Many positive actions taken.
For this to work, I think we need to:
Be extra vigilant to ensure that effective altruism remains a “big tent”.
Remain committed to EA as a question.
Remain committed to worldview diversification.
Make sure that it is easy for people to get involved and take action.
Celebrate all the good actions[8] that people are taking (not diminish people when they don’t go from 0 to 100 in under 10 seconds flat).
Communicate our ideas with high fidelity while remaining brief and to the point (be careful about the memes we spread).
Avoid coming across as dogmatic, elitist, or out-of-touch.
Work towards clear, tangible wins that we can point to.
Keep trying to have difficult conversations without resorting to tribalism.
Try to empathise with the variety of perspectives people will bring when they come across our ideas and community.
Develop subfields intentionally, but don’t brand them as “effective altruism.”
Keep the focus on effective altruism as a question, not a dogma.
I’m not saying that “anything goes”, and we should drop our standards and not be bold enough to make strong and unintuitive claims. I think we must continue to be truth-seeking to develop a shared understanding of the world and what we should do to improve it. But I think we need to keep our minds open to the fact that we’re going to be wrong about a lot of things, new people will bring helpful new perspectives, and we want to have the type of effective altruism that attracts many people who have a variety of things to bring to the table.
- ^
After reading several comments I think that I could have done better by defining “big tent” at the beginning so I added this definition and clarification after this was posted.
- ^
I wear the nerd label proudly
- ^
And love me some philosophy papers
- ^
- ^
e.g. Despite it being a stretch: this
- ^
We are aware that this is often fungible with larger donors but we think that’s okay for reasons we will get into in future posts. We also expect that the type of donor who’s interested in fungibility is a great person to get more involved in direct work so we are working to ensure that these deeper concepts are still presented to donors and have a path for people to go “down the rabbit hole”.
- ^
As opposed to concerned: I’ve heard people share that their family or friends were worried about their involvement after looking into it.
- ^
Even beyond our typical recommendations. I’ve been thinking about “everyday altruism” having a presence within our local EA group (e.g. giving blood together, volunteering to teach ethics in schools, helping people get to voting booths etc.) – not skewing too much this way, but having some presence could be good. As we’ve seen with Carrick’s campaign, doing some legible good within your community is something that outsiders will look for and will judge you on. Plus (low confidence) some of these things could make a decent case for themselves considering how low cost they might be.
There’s value in giving the average person a broadly positive impression of EA, and I agree with some of the suggested actions. However, I think some of them risk being applause lights – it’s easy to say we need to be less elitist, etc., but I think the easy changes you can make sometimes don’t address fundamental difficulties, and making sweeping changes has hidden costs when you think about what they actually mean.
This is separate from any concern about whether it’s better for EA to be a large or small movement.
Edit: big tent actually means “encompassing a broad spectrum of views”, not “big movement”. I now think this section has some relevance to the OP but does not centrally address the above point.
As I understand it, this means spending more resources on people who are “less elite” and less committed to maximizing their impact. Some of these people will go on to make career changes and have lots of impact, but it seems clear that their average impact will be lower. Right now, EA has limited community-building capacity, so the opportunity cost is huge. If we allocate more resources to “big tent” efforts, it would mean less field-building at top-20 universities (Cambridge AGISF), less highly scalable top-of-funnel outreach (80,000 Hours), and fewer workshops for people who are committed to career changes and get huge speedups from them.
One could still make a neglectedness case for big-tent efforts, but the cost-benefit calculation definitely can’t be summed up in one line.
I’m uncomfortable doing too much celebrating of actions that are much lower impact than other actions (e.g. donating blood), from both an honesty/transparency perspective and a consequentialist perspective. From a consequentialist perspective, we should probably celebrate actions that create a lot of expected impact in order to encourage people to take those actions. So the relevant question is whether donating blood brings one closer to having a very high-impact career. I think the answer is often no: it often doesn’t involve practicing careful scope-sensitive thinking, or bring high-impact actions into one’s action space.
From a transparency perspective, celebration disproportionate to the good done also feels kind of fake. In the extreme, we’re basically distorting our impressions of people’s actions to get people to join a movement. I’m not saying we should shun people for taking a suboptimal action, but we should be transparent about the fact that (a) some altruistic actions aren’t very good and don’t deserve celebration, and (b) some actions are good but only because they’re on the path to an impactful career.
Communication is hard. There’s a tradeoff between fidelity, brevity, scale, and speed (time spent writing/editing/talking to distill 1 idea):
Long one-on-ones get very high fidelity, low brevity, low scale, and high speed
80k podcasts are high fidelity, low brevity, high scale, and low speed
A tabling pitch is low fidelity, high brevity, moderate scale, and moderate speed
A short, polished EA forum post is moderate fidelity, high brevity, high scale, and very low speed. If you’re not a gifted writer it takes multiple editing cycles to create a really high-quality post. Usually this includes copy-editing, sending the Google Doc draft to friends, having discussions in the comments, maybe adding visuals.
If we max out fidelity and brevity, we have to have lower scale and/or speed. I think this is okay if we’re targeting communication, but it doesn’t play well with the big-tent approach where we also need high scale. One could say we should just get closer to the Pareto frontier, but I think everyone is already trying to do this.
I don’t strongly disagree with this—it’s bad to put off people unnecessarily—but I think it can easily be taken too far.
I’m worried that people will avoid looking dogmatic by adding unwarranted uncertainty about what actions are best, and in particular by being unwilling to reject popular ideas. I think the best remedy to looking dogmatic is actually having good, legible epistemics, not avoiding coming across as dogmatic by adding false uncertainty. (This is related to the post “PR is corrosive; ‘reputation’ is not”.) When someone asks whether volunteering in an animal shelter is high-impact, we should give well-reasoned arguments that there are probably higher-value things to do under almost every scope-sensitive moral view (perhaps starting from first principles if they’re new), not avoid looking dogmatic by telling them something largely false like “Some people might find higher impact at an animal shelter because they have comparative advantage / are much more motivated, and there could also be unknown unknowns that place really high value on the work at animal shelters”. It’s impossible to spend 1% of our resources on every idea with as much true merit as volunteering at animal shelters, because there are more than 100 such ideas, so we would only do so because of bias towards popular things. But when we require a well-reasoned case using the ITN framework before allocating 1% of our effort to a problem, and therefore refuse to spend 1% of our effort on animal shelters, plastic bag bans, or the NYC homelessness problem, we will come off as dogmatic to some people. The OP addresses the need to protect our epistemics at the end, but I think doesn’t stress this enough.
There are also many crucial EA things that sound or are elitist.
More resources are focused on top universities than community colleges (because talent is concentrated there and this ultimately helps the most sentient beings).
Over 80% of EA funding is from billionaires.
People are flown across the world to retreats (because this is often the most efficient way to network or learn, and we think their time can do more good than spending the money on anything else).
We are looking for people who produce 1000x the impact of others (because they have more multipliers available).
We shouldn’t be exclusionary for no reason when talking to new people. But based on my experience community-building at two universities and at ~10 retreats/EAGs, much of the reason EA looks elitist is not that we’re exclusionary for no reason; it’s that EAs do important things that look elitist.
Maybe the most elitist-sounding practices should even be slightly reduced for PR reasons. But going further to reduce the appearance of elitism would hamstring EA by taking away some of the most valuable direct and meta interventions.
--
I think the following things can both be true:
The best actions are much higher impact than others and should be heavily encouraged.
Most people will come in on easier but lower impact actions, and if there isn’t an obvious, stepped progression to higher impact actions – and support to facilitate this – then many will fall out unnecessarily. Or they may be put off entirely if “entry level” actions either aren’t available or receive very little reward or status.
I didn’t read the OP as saying that we should settle with lower impact actions if there’s the potential for higher impact ones. I read it as saying that we should make it easier for people to find their level—either helping them to reach higher impact over time if for whatever reason they’re unable or unwilling to get there straight away, or making space for lower impact actions if for whatever reason that’s what’s available.
Some of this will involve shouting out and rewarding less impactful actions beyond their absolute value – not for its own sake, but because this may be the best way of helping this progression. I’ve definitely noticed the “0-100” thing, and if I were younger and less experienced it might have bothered me more.
Thanks Rob. I think you just made my point better than me! 😀
Thanks for your response. I tend to actually agree with a lot (but not all) of these points, so I totally own that some of this just needs clarification that wouldn’t be the case if I were clearer in my original post.
There’s a difference between actively recruiting from “less elite” sources and being careful about your shopfronts so that they don’t put off would-be effective altruists and create enemies out of could-be allies. I’m pointing much more to the latter than the former (though I do think there’s value in the former too).
I’m mostly saying we shouldn’t shun people for taking a suboptimal action. But also, we should be careful about how confident we are about what is suboptimal or not, use positive reinforcement of good actions instead of guilting people for not reaching a particular standard, and recognise that we’re all on a journey and the destination isn’t always that clear anyway (Rob Wiblin thought it might not be a good idea for SBF to earn to give, and I think that encouraging him to become a grantmaker at Open Philanthropy probably would have been a worse outcome).
Side note: There’s something pretty off-putting about treating the actions of altruistic people as purely a means to getting them into a particular predestined career. I think we lose good people when we treat them this way. We can seem like slimy salespeople.
Again this is where you have different focuses in different places. Our shopfronts (e.g. effectivealtruism.org, fellowships, virtual programs, introductory presentations, personal interactions with community members and group leaders etc) start brief and concise with a clear path to dig deeper.
I think this is a central confusion with my post and I own I must not have communicated this well: big tent doesn’t mean actively increasing reach. Big tent means encouraging and showcasing the diversity that exists within the community so that people can see that we’re committed to the question of “how can we do the most good” not a specific set of answers.
I agree! The former is a great response, the latter is not. I’d also say something along the lines of “you can have multiple goals and that’s fine” and that if the warm fuzzies is important and motivating for you then that’s great. I wouldn’t encourage someone to say it’s “EA” if it isn’t.
Great! That’s one of my main points.
I agree! I think we should just be judicious about it and bear in mind both (a) how perception of elitism can hurt us; and (b) when we miss out on great people because of unnecessary elitism that results in us achieving a lot less.
Thanks, this clears up a lot for me.
Great! I definitely should have defined that up front!
Correct me if I’m wrong in my interpretation here, but it seems like you are modelling impact on a unidimensional scale, as though there is always an objective answer that we know with certainty when asked ‘is X or Y more impactful’?
I got this impression from what I understood your main point to be, something like:
There is a tail of talented people who will make the most impact.
Any diversion of resources towards less talented people will be lower expected value.
I think there are several assumptions in both of these points that I want to unpack (and disagree with).
On the question of whether there is a unidimensional scale of talented people who will make the most impact: I believe that the EA movement could be wrong about the problems it thinks are most important, and/or the approaches to solving them. In the world where we are wrong, if we deter many groups with important skillsets or approaches that we didn’t realise were important because we were overconfident in some problems/solutions, then that’s quite bad. Conversely, in the world where we are right, yes maybe we have invested in more places than turned out to be necessary, but the downside risks seem smaller overall (depending on constraints, which I’ll get to in next para). You could argue that talent correlates across all skillsets and approaches, and maybe there’s some truth to that, but I think there’s lots of places where the tails come apart, and I worry that not taking worldview diversification seriously can lead to many failure modes for a movement like EA. If you are quite certain that EA top cause areas as listed on 80k are right about the problems that are ‘most’ important and the ‘best’ approaches to solving them (this second one I am extremely uncertain about), you may reasonably disagree with me here—is that the case? In my view, these superlatives and collapsing of dimensions requires a lot of certainty about some baseline assumptions.
On the question of whether resource diversion from talented people to less ‘talented’ people is lower expected value: I think this depends on lots of things (sidestepping the question of talent definition which above para addresses). Firstly, are the resources substitutable? In the example you gave with university groups, I’d say no, if you fund a non-top university group then you are not detracting from top university group funding (assuming no shortage of monetary funding, which I believe we can assume). However, if you meant the resource is the time of a grantmaker specialised in community building, and it is harder for them to evaluate a non-top uni than top because maybe they know fewer people there etc. then I’d say that resource is substitutable. The question of substitutability matters to identify if it is a real cost, but it also opens a question of resource constraints and causality. Imagine a world where that time-constrained grantmaker decides to not take the easy decision but bear short term cost and invest in getting to know the new non-top uni—it is possible that the ROI is higher because of returns to early-stage scaling being higher, and new value of information. We could also imagine a different causality: if grantmaking itself was less centralised (which a bigger tent might lead to), some grantmakers might cater to non-top unis, and others to top unis, and we’d be able to see outcomes from both. So overall I think this point of yours is far from clearly true, and a bigger tent would give more value of information.
There were some points you made that I do agree with you on. In particular: celebration disproportionate to the impact feeling fake, adding false uncertainty to avoid coming across as dogmatic (although I think there is a middle way here), and real trade-offs between the axes of desirable communication qualities. Another thing I noticed that I like is a care for epistemic quality and rigour and wanting to protect those cultural aspects. It’s not obvious to me why that would need to be sacrificed to have a bigger tent – but maybe we have different ideas of what a bigger tent looks like.
(Also, I did a quick reversal test in my head of the actions in the OP, as mentioned in the applause lights post you linked to, and the vast majority do not stand up as applause lights in my opinion, in that I’d bet you’d find the opposite point of view being genuinely argued for around this forum or LW somewhere.)
(I also felt that the applause lights argument largely didn’t hold up and came across as unnecessarily dismissive, I think the comment would have held up better without it)
Thanks, I made an edit to weaken the wording.
I mostly wanted to point out a few characteristics of applause lights that I thought matched:
the proposed actions are easier to cheer for on a superficial level
arguing for the opposite is difficult, even if it might be correct: “Avoid coming across as dogmatic, elitist, or out-of-touch.” inverts to “be okay with coming across as dogmatic, elitist, or out-of-touch”
when you try to put them into practice, the easy changes you can make don’t address fundamental difficulties, and making sweeping changes has high cost
Looking over it again, saying they are applause lights is saying that the recommendations are entirely vacuous, which is a pretty serious claim I didn’t mean to make.
Thanks Thomas! I definitely agree that when you get into the details of some of these they’re certainly not easy and that the framing of some of them could be seen as applause lights.
I think this is unhelpfully conflating at least three pretty different concepts.
Whether impact can be collapsed to a single dimension when doing moral calculus.
Whether morality is objective
Whether we have the predictive prowess to know with certainty ahead of time which actions are more impactful
Yeah maybe. Sorry if you found it unhelpful, I could have been clearer. I find your decomposition interesting. I was most strongly gesturing at the third.
I guess my personal read here is that I don’t think Thomas implied that we had perfect predictive prowess, nor did his argument rely upon this assumption.
Yeah, I just couldn’t understand his comment until I realised that he’d misunderstood the OP as saying it should be a big movement, rather than a movement with diverse views that doesn’t deter great people who hold different views. So I was looking for an explanation and that’s what my brain came up with.
Thank you, that makes sense!
First off, note that my comment was based on a misunderstanding of “big tent” as “big movement”, not “broad spectrum of views”.
As Linch pointed out, there are three different questions here (and there’s a 4th important one):
Whether impact can be collapsed to a single dimension when doing moral calculus.
Whether morality is objective
Whether we have the predictive prowess to know with certainty ahead of time which actions are more impactful
Whether we can identify groups of people to invest in, given the uncertainty we have
Under my moral views, (1) is basically true. I think morality is not (2) objective. (3) is clearly false. But the important point is that (3) is not necessary to put actions on a unidimensional scale, because we should be maximizing our expected utility with respect to our current best guess. This is consistent with worldview diversification, because it can be justified by unidimensional consequentialism in two ways: maximizing EV under high uncertainty and diminishing returns, and acausal trade / veil of ignorance arguments. Of course, we should be calibrated as to the confidence we have in the best guess of our current cause areas and approaches.
I would state my main point as something like “Many of the points in the OP are easy to cheer for, but do not contain the necessary arguments for why they’re good, given that they have large costs”. I do believe that there’s a tail of talented+dedicated people who will make much more impact than others, but I don’t think the second half follows, just that any reallocation of resources requires weighing costs and benefits.
Here are some things I think we agree on:
Money has low opportunity cost, so funding community-building at a sufficiently EA-aligned synagogue seems great if we can find one.
Before deciding that top community-builders should work at a synagogue, we should make sure it’s the highest EV thing they could be doing (taking into account uncertainty and VOI). Note there are other high-VOI things to do, like trying to go viral on TikTok or starting EA groups at top universities in India and Brazil.
We can identify certain groups of people who will pretty robustly have higher expected impact (again where “expected” takes into account our uncertainty over what paths are best): people with higher engagement (able to make career changes), higher intelligence+conscientiousness.
Putting some resources towards less talented/committed people is good given some combination of uncertainty and neglectedness/VOI, and it’s unclear where to put the marginal resource.
It is plausible to me that there are some low opportunity cost actions that might make it way more likely that certain people – who otherwise wouldn’t engage with effective altruism – will, over the next 50 years, work on guesses that are plausible candidates for our top (or close to the top) guesses.[1]
For example, how existing community organizers manage certain conversations can make a really big difference to some people’s lasting impressions of effective altruism.
Consider a person who comes to a group who is sceptical of the top causes we propose but uses the ITN framework to make a case for another cause that they believe is more promising by EA lights.
There are many ways to respond to this person. One is to make it clear that you think that this person just hasn’t thought about it enough, or they would just come to the same conclusion as existing people in the effective altruism community. Another is to give false encouragement, overstating the extent of your agreement for the sake of making this person, who you disagree with, feel welcome. A skilled community builder with the right mindset can, perhaps, navigate between the above two reactions. They might use this as an opportunity to really reinforce the EA mindset/thinking tools that this person is demonstrating (which is awesome!) and then give some pushback where pushback is due.[2]
There are also some higher opportunity cost actions to achieve this inclusivity, including the ones you discussed (but this doesn’t seem what Luke was advocating for, see his reply [3]).
This seems to get the benefit, if done successfully, of not only their work but also of having another person who might be able to communicate the core idea of effective altruism with high fidelity to the many others they meet over their entire career, within a sphere of people we might not otherwise reach.
Ideally, pushback is just on one part at a time. The shotgun method rarely leads to a constructive conversation and it’s hard to resolve all cruxes in a single conversation. The goal might be just to find one to resolve for now (maybe even a smaller one than the one that the conversation started out with) and hopefully they’ll enjoy the conversation enough to come back to another event to resolve a second.
I think it’s also worth acknowledging 1) that we have spent a decade steelmanning our views, and that sometimes it takes a lot of time to build up new ideas (see butterfly ideas), which won’t get built up if no one makes that investment but also 2) people have spent 10 years thinking hard about the “how to help others as much as possible” question so it is definitely worth some investment to get an understanding of why these people think there is a case for these existing causes.
maybe this whole comment should be a reply to Luke’s reply but moving this comment is a tad annoying so hopefully it is forgivable to leave it here 🌞.
Thanks Sophia! That example is very much the kind of thing I’m talking about. IMHO it’s pretty low cost and high value for us to try and communicate in this way (and would attract more people with a scout mindset which I think would be very good).
🌞
Your comment now makes more sense given that you misunderstood the OP. Consider adding an edit mentioning what your misunderstanding was at top of your comment, I think it’d help with interpreting it.
So you agree 3 is clearly false. I thought that you thought it was near enough true to not worry about the possibility of being very wrong on a number of things. Good to have cleared that up.
I imagine then our central disagreement lies more in what it looks like once you collapse all that uncertainty on your unidimensional EV scale. Maybe you think it looks less diverse (on many dimensions) overall than I do. That’s my best guess at our disagreement—that we just have different priors on how much diversity is the right amount for maximising impact overall. Or maybe we have no core disagreement. On an aside, I tend to find it mostly not useful as an exercise to do that collapsing thing at such an aggregate level, but maybe I just don’t do enough macro analysis, or I’m just not that maximising.
BTW on your areas where you think we agree: I strongly disagree with using commitment to EA as a sign of how likely someone is to make impact. Probably it does better than the base rate in the global population, sure, but here we are discussing the marginal set of people who would/wouldn’t get deterred from using EA as one of their inputs in helping them make an impact, depending on whether you take a big tent approach. I’m personally quite careful not to confuse “EA” with “having impact” (not saying you did this, I’m just pretty wary about it and thus sensitive), and do worry about people selecting for “EA alignment” – it really turns me off EA because it’s a strong sign of groupthink and bad epistemic culture.
This is a great sentence, I will be stealing it :)
However, I think “having good legible epistemics” being sufficient for not coming across as dogmatic is partially wishful thinking. A lot of these first impressions are just going to be pattern-matching, whether we like it or not.
I would be excited to find ways to pattern-match better, without actually sacrificing anything substantive. One thing I’ve found anecdotally is that a sort of “friendly transparency” works pretty well for this—just be up front about what you believe and why, don’t try to hide ideas that might scare people off, be open about the optics on things, ways you’re worried they might come across badly, and why those bad impressions are misleading, etc.
Thanks for this post, Luke!
This touches on many of my personal fears about the community in the moment.
I sincerely hope that anyone who comes across our community with the desire and intent to participate in the project of effective altruism feels that they are welcome and celebrated, whether that looks like volunteering an hour each month, donating whatever they feel they can afford, or doing direct work.
To lose people who have diverse worldviews, abilities and backgrounds would be a shame, and could potentially limit the impact of the community. I’d like to see an increasingly diverse effective altruism community, all bound by seeking to do as much good as we can.
The call to action here resonates—feels really important and true to me, and I was just thinking yesterday about the same problem.
The way I would frame it is this:
The core of EA, what drives all of us together, is not the conclusions (focus on long term! AI!) -- it’s the thought process and principles. Although EA’s conclusions are exciting and headline-worthy, pushing them without pushing the process feels to me like it risks hollowing out an important core and turning EA into (more of) a cult, rather than a discipline.
Edit to add re. “celebrate the process”—A bunch of people have critiqued you for pushing “celebrate all the good actions” since it risks diluting the power of our conclusions, but I think if we frame it as “celebrate and demonstrate the EA process” then that aligns with the point I’m trying to make, and I think works.
Thanks! I really like your framing of both these 😀
Thank you for this post! I’m a loud-and-proud advocate of the “big tent”. It’s partly selfish, because I don’t have the markers that would make me EA Elite (like multiple Oxbridge degrees or a gazillion dollars).
What I do have is a persistent desire to steadily hack away at the tremendous amount of suffering in the world, and a solid set of interpersonal skills. So I show up and I make my donations and I do my level best to encourage/uplift/motivate the other folks who might feel the way that I do. If the tent weren’t big, I wouldn’t be here, and I think that would be a loss.
Your new GWWC member’s EAGx experience is exactly what I’m out here trying to prevent. Here is someone who was interested/engaged enough to go to a conference, and—we’ve lost them. What a waste! Just a little more care could have helped that person come away willing to continue to engage with EA—or at least not have a negative view of it.
There are lots of folks out there who are working hard on “narrow tower” EA. Hooray for them—they are driving the forward motion of the movement and achieving amazing things. But in my view, we also need the “big tent” folks to make sure the movement stays accessible.
After all, “How can I do the most good, with the resources available to me?” is a question more—certainly not fewer! - people should be encouraged to ask.
I’m aware that this is not exactly the central thrust of the piece, but I’d be interested if you could expand on why we might expect the former to be a smaller group than the latter.
I agree that a “commitment to using reason and evidence to do the most good we can” is a much better target to aim for than “dedicated to a particular set of conclusions about the world”. However, my sense is that historically there have been many large and rapidly growing groups of people that fit the second description, and not very many of the first. I think this was true for mechanistic reasons related to how humans work rather than being accidents of history, and think that recent technological advances may even have exaggerated the effects.
+1 to this.
In fact, I think that it’s harder to get a very big (or very fast-growing) set of people to do the “reason and evidence” thing well. I think that reasoning carefully is very hard, and building a community that reasons well together is very hard.
I am very keen for EA to be about the “reason and evidence” thing, rather than about specific answers. But in order to do this, I think that we need to grow cautiously (maybe around 30%/year) and in a pretty thoughtful way.
I agree with this. I think it’s even harder to build a community that reasons well together when we come across dogmatically (and we risk cultivating an echo chamber).
Note: I do want to applaud a lot of recent work that the core CEA team is doing to avoid this; the updates to effectivealtruism.org, for example, have helped!
A couple of things here:
Firstly, 30%/year is pretty damn fast by most standards!
Secondly, I agree that being thoughtful is essential (that’s a key part of my central claim!).
Thirdly, some of the rate of growth is within “our” control (e.g. CEA can control how much it invests in certain community building activities). However, a lot of things aren’t. People are noticing as we ramp up activities labelled EA or even loosely associated with EA.
For example, to avoid growing faster than 30%/year, should someone tell Will and the team promoting WWOTF to pull back on the promotion? What about telling SBF not to support more candidates or not to scale up the FTX Future Fund? Should we not promote EA to new donors/GWWC members? Should GiveWell stop scaling up?
If anything associated with EA grows, it’ll trickle through to more people discovering it.
I think we need to expect that it’s not entirely within our control and to act thoughtfully in light of this.
Agree that echo chamber/dogmatism is also a major barrier to epistemics!
“30% seems high by normal standards”—yep, I guess so. But I’m excited about things like GWWC trying to grow much faster than 30%, and I think that’s possible.
Agree it’s not fully within our control, and that we might not yet be hitting 30%. I think that if we’re hitting >35% annual growth, I would begin to favour cutting back on certain sorts of outreach efforts or doing things like increasing the bar for EAG. I wouldn’t want GW/GWWC to slow down, but I would want you to begin to point fewer people to EA (at least temporarily, so that we can manage the growth). [Off the cuff take, maybe I’d change my mind on further reflection.]
Are there estimates about current or previous growth rates?
There are some, e.g. here.
I think that works for many groups, and many subfields/related causes, but not for “effective altruism”.
To unpack this a bit, I think that “AI safety” or “animal welfare” movements could quite possibly get much bigger much more quickly than an “effective altruism” movement that is “commitment to using reason and evidence to do the most good we can”.
However, when we sell the idea that we’re “committed to using reason and evidence to do the most good we can” and instead present people with a very narrow set of conclusions, I think we do neither of these things well. Instead we put people off and undermine our value.
I believe that the value of the EA movement comes from this commitment to using reason and evidence to do the most good we can.
People are hearing about EA. These people could become allies or members of the community and/or our causes. However, if we present ourselves too narrowly we might not just lose them, but they might become adversaries.
I’ve seen this already: people souring on EA because it seemed too narrow and too overconfident, becoming increasingly adversarial, and that hurting our overall goal of improving the world.
I agree! That’s why I’m surprised by the initial claim in the article, which seems to be saying that we’re more likely to be a smaller group if we become ideologically committed to certain object-level conclusions, and a larger group if we instead stay focused on having good epistemics and seeing where that takes us. It seems like the two should be flipped?
Sorry if the remainder of the comment didn’t communicate this clearly enough:
I think the “bait and switch” of EA (selling “EA is a question” but seeming to deliver “EA is these specific conclusions”) is self-limiting for our total impact, because:
It limits the size of our community (put off people who see it as a bait and switch)
It limits the quality of the community (groupthink, echo chambers, overfishing small ponds etc)
We lose allies
We create enemies
Impact is a product of: size (community + allies) * quality (community + allies) - actions of enemies actively working against us.
If we decrease the size and quality of our community and allies while increasing the number and intensity of people working against us, then we limit our impact.
Does that help clarify?
A core part of the differing intuitions might be because we’re thinking about two different timescales.
It seems intuitively right to me that the “dedicated to a particular set of conclusions about the world” version of effective altruism will grow faster in the short term. I think this might be because conclusions require less nuanced communication and, being more concrete, they suggest more concrete actions that can get people on board faster.
I also have the intuition that a “commitment to using reason and evidence to do the most good we can” (I’d maybe add, “with some proportion of our resources”) has the potential to have a larger backing in the long-term.
I have done a terrible “paint” job (literally used paint) in purple on one of the diagrams in this post to illustrate what I mean:
There are movement building strategies that end us up on the grey line, which gives us faster growth in the short term (so a bigger tent for a while), but doesn’t change our saturation point (we’re still at saturation point 1).
I think that a “broad spectrum of ideas” might mean our end saturation point is higher, even if this might require slower growth in the near term. I’ve illustrated this as the purple line, which ends up being bigger in the end, at saturation point 2, even if in the short term growth is slower. In this sense, we will be a smaller tent for a while, but we have the potential to end up as a bigger tent in some terminal equilibrium.
An example of a “movement” that had a vaguer, bigger picture idea that got so big it was too commonplace to be a movement might be “the scientific method”?
I think “large groups that reason together on how to achieve some shared values” is something that’s so common, that we ignore it. Examples can be democratic countries, cities, communities.
Not that this means reasoning about being effective can attract as large a group. But one can hope.
I both relatively strongly agree and strongly disagree with this post. Apologies that my points contradict one another:
Agreement:
Yes, community vibes feel weird right now. And I think in the run up to WWOTF they will only get weirder
Yes, we should be gracious to people who do small things. For me, being an EA is about being more effective or more altruistic with even $10 a month.
Disagreement:
I reckon it’s better if we focus on being a smaller, highly engaged community rather than a really big one. I still think there should be actual research on this, but so far, much of the impact (SBF, Moskovitz funding GiveWell charities, direct work) has been from very engaged people. I find it compelling that we want similar levels of engagement in future. Do low-engagement people become highly engaged? I don’t know. I don’t emotionally enjoy this conclusion, but I can’t say it’s wrong, even though it clashes with the bullet point I made above.
GWWC is clearly a mass movement kind of organisation. I guess they should say “you might want to check out effective altruism, but it’s not necessary”.
I don’t think that EA is for everyone. Again this clashes with what I said above, but I think that it can be harder for people who leave a community after some time than for those who are rejected at the door. If my above point is correct, then there should be some way to signal to people that EA is for people who want to really engage and that it may not be for everyone.
Synthesis
I suggest a wider movement being created around effective giving, perhaps reaching religious groups. This seems like the real “mass movement” etc
I would like research on if being smaller and higher engaged or not is better
Be welcoming to new people, gracious to people whatever they are doing, but signal that EAGs are mainly for those who are engaged. Anyone can come to events and feel welcome, but there is a desire for more engagement and that may not fit everyone.
I’m worried this will be controversial and I think I could have worded it better, but I think it’s better to say something clear and maybe wrong than something vague. I may make edits and explain why.
Thanks Nathan. I definitely see the tensions here. Hopefully these clarifications will help :)
My central claim isn’t about the size of the community, it’s about the diversity of EA that we present to the world (and represent within EA) and staying true to the core question not a particular set of conclusions.
It depends on what you mean by “focus” too. The community will always be some degree of concentric circles of engagement. The total size and relative distribution of engagement will vary depending on what we focus on. My central claim is that the total impact of the community will be higher if the community remains a “big tent” that sticks to the core question of EA. The mechanism is that we create more engagement within each level of engagement, with more allies and fewer adversaries.
I’ve never seen someone become highly engaged instantly. I’ve only seen engagement as something that increases incrementally (sometimes fast, sometimes slow, sometimes it hits a point and tapers off, and sadly sometimes high engagement turns to high anti-engagement).
Depends on what you mean by EA. In my conception (and the conception I advocate for) everyone is an effective altruist to some extent sometimes and nobody is entirely an effective altruist ever. Effective altruism is a way of thinking not an identity. Some people are part of the “EA community” while some people eschew the label and community yet have much higher impact than most people within the “EA community” because they’ve interrogated big world problems and taken significant positive actions.
Why not both? Have a big tent with less-engaged people, and a core of more-engaged people.
Also, a lot of people donating small amounts can add up to big amounts.
Agree on both points. I think the concentric circles model still holds well. “Big tent” still applies at each level of engagement though. The best critics in the core will be those who still feel comfortable in the core while disagreeing with lots of people. I highly value people who are at a similar level of engagement but hold very different views to me as they make the best critics.
What is WWOTF?
Agreed, though it makes sense for Giving What We Can to become a mass movement. I think it’d be good for some people involved in GWWC to join EA, but there’s no need to push it too hard. More like let people know about EA and if it resonates with people they’ll come over.
Maybe, I think there’s scope for people to become more engaged over time.
“What We Owe the Future”, Will MacAskill’s new book.
I think there are two ways to frame an expansion of the group of people who are engaged with EA through more than donations.
The first, which sits well with your disagreements: we’re doing extremely important things which we got into by careful reasoning about our values and impact. More people may cause value drift or dilute the more impactful efforts to make way on the most important problems.
But I think a second one is much more plausible: we’re almost surely wrong about some important things. We have biases that stem from who the typical EAs are, where they live, or just the very noisy path that EA has taken so far. While our current work is important, it’s also crucial that our ideas are exposed to, and processed by, more people. What’s “value drift” in one person’s eyes might really be an important correction in another’s. What’s “dilution” may actually prove to mean a host of new useful perspectives and ideas (among other less useful ones).
Thanks for writing this up Luke! I think you’re pointing to some important issues. I also think you and the GWWC team are doing excellent work—I’m really excited to see more people introduced to effective giving!
[Edit to add: Despite my comment below, I still am taking in the datapoints and perspectives that Luke is sharing, and I agree with many of his recommendations. I don’t want to go into all of the sub-debates below because I’m focused on other priorities right now (including working on some of the issues Luke raises!).]
However, I worry that you’re conflating a few pretty different dimensions, so I downvoted this post.
Here are some things that I think you’re pointing to:
“Particular set of conclusions” vs. “commitment to using evidence and reasoning”
Size of the community, which we could in turn split into
Rate of growth of the community
Eventual size of the community
How welcoming we should be/how diverse
[I think you could split this up further.]
In what circumstances, and to what degree, there should be encouragement/pressure to take certain actions, versus just presenting people with options.
How much we should focus on clearly communicating EA to people who aren’t yet heavily involved.
This matters because you’re sometimes then conflating these dimensions in ways that seem wrong to me (e.g. you say that it’s easier to get big with the “evidence and reasoning” framing, but I think the opposite).
I also interpreted this comment as quite dismissive but I think most of that comes from the fact Max explicitly said he downvoted the post, rather than from the rest of the comment (which seems fine and reasonable).
I think I naturally interpret a downvote as meaning “I think this post/comment isn’t helpful and I generally want to discourage posts/comments like it.” That seems pretty harsh in this case, and at odds with the fact Max seems to think the post actually points at some important things worth taking seriously. I also naturally feel a bit concerned about the CEO of CEA seeming to discourage posts which suggest EA should be doing things differently, especially where they are reasonable and constructive like this one.
This is a minor point in some ways but I think explicitly stating “I downvoted this post” can say quite a lot (especially when coming from someone with a senior position in the community). I haven’t spent a lot of time on this forum recently so I’m wondering if other people think the norms around up/downvoting are different to my interpretation, and in particular whether Max you meant to use it differently?
[EDIT: I checked the norms on up/downvoting, which say to downvote if either “There’s an error”, or “The comment or post didn’t add to the conversation, and maybe actually distracted.” I personally think this post added something useful to the conversation about the scope and focus of EA, and it seems harsh to downvote it because it conflated a few different dimensions—and that’s why Max’s comment seemed a bit harsh/dismissive to me]
I ran the Forum for 3+ years (and, caveat, worked with Max). This is a complicated question.
Something I’ve seen many times: A post or comment is downvoted, and the author writes a comment asking why people downvoted (often seeming pretty confused/dispirited).
Some people really hate anonymous downvotes. I’ve heard multiple suggestions that we remove anonymity from votes, or require people to input a reason before downvoting (which is then presumably sent to the author), or just establish an informal culture where downvotes are expected to come with comments.
So I don’t think Max was necessarily being impolite here, especially since he and Luke are colleagues who know each other well. Instead, he was doing something that some people want a lot more of and other people don’t want at all. This seems like a matter of competing access needs (different people wanting different things from a shared resource).
In the end, I think it’s down to individual users to take their best guess at whether saying “I downvoted” or “I upvoted” would be helpful in a given case. And I’m still not sure whether having more such comments would be a net positive — probably depends on circumstance.
***
Max having a senior position in the community is also a complicated thing. On the one hand, there’s a risk that anything he says will be taken very seriously and lead to reactions he wouldn’t want. On the other hand, it seems good for leaders to share their honest opinions on public platforms (rather than doing everything via DM or deliberately softening their views).
There are still ways to write better or worse comments, but I thought Max’s was reasonable given the balancing act he’s trying to do (and the massive support Luke’s post had gotten already — I’d feel differently if Max had been joining a pile-on or something).
I think the problem isn’t with saying you downvoted a post and why (I personally share the view that people should aim to explain their downvotes).
The problem is the actual reason:
The message that, for me, stands out from this is “If you have an important idea but can’t present it perfectly—it’s better not to write at all.” Which I think most of us would not endorse.
I didn’t get that message at all. If someone tells me they downvoted something I wrote, my default takeaway is “oh, I could have been more clear” or “huh, maybe I need to add something that was missing” — not “yikes, I shouldn’t have written this”. *
I read Max’s comment as “I thought this wasn’t written very clearly/got some things wrong”, not “I think you shouldn’t have written this at all”. The latter is, to me, almost the definition of a strong downvote.
If someone sees a post they think (a) points to important issues, and (b) gets important things wrong, any of upvote/downvote/decline-to-vote seems reasonable to me.
*This is partly because I’ve stopped feeling very nervous about Forum posts after years of experience. I know plenty of people who do have the “yikes” reaction. But that’s where the users’ identities and relationship comes into play — I’d feel somewhat differently had Max said the same thing to a new poster.
I don’t share your view about what a downvote means. However, regardless of what I think, it doesn’t actually have any fixed meaning beyond that which people assign to it—so it’d be interesting to have some stats on how people on the forum interpret it.
Most(?) readers won’t know who either of them is, not to mention their relationship.
What does a downvote mean to you? If it means “you shouldn’t have written this”, what does a strong downvote mean to you? The same thing, but with more emphasis?
Why not create a poll? I would, but I’m not sure exactly which question you’d want asked.
Which brings up another question — to what extent should a comment be written for an author vs. the audience?
Max’s comment seemed very directed at Luke — it was mostly about the style of Luke’s writing and his way of drawing conclusions. Other comments feel more audience-directed.
Personally, I primarily downvote posts/comments where I generally think “reading this post/comment will on average make forum readers be worse at thinking about this problem than if they didn’t read this post/comment, assuming that the time spent reading this post/comment is free.”
I basically never strong downvote posts unless it’s obvious spam or otherwise an extremely bad offender in the “worsens thinking” direction.
It’s been over a week so I guess I should answer even if I don’t have time for a longer reply.
I think so, but I’m not very confident.
I don’t think private conversations can exist on a public platform. If it’s not a DM, there’s always an audience, and in most contexts, I’d expect much of a comment’s impact to come from its effects on that audience.
The polls in that specific group look like they have a very small and probably unrepresentative sample size. Though I don’t think we’ll be able to get a much larger one on such a question, I guess.
Nice to see you on the Forum again!
Thanks for sharing that perspective—that makes sense. Possibly I was holding this to too high a standard—I think that I held it to a higher standard partly because Luke is also an organization/community leader, and probably I shouldn’t have taken that into account. Still, overall my best guess is that this post distracted from the conversation, rather than adding to it (though others clearly disagree). Roughly, I think that the data points/perspectives were important but not particularly novel, and that the conflation of different questions could lead to people coming away more confused, or to making inaccurate inferences. But I agree that this is a pretty high standard, and maybe I should just comment in circumstances like this.
I also think I should have been more careful re seeming to discourage suggestions about EA. I wanted to signal “this particular set of suggestions seems muddled” not “suggestions are bad”, but I definitely see how my post above could make people feel more hesitant to share suggestions, and that seems like a mistake on my part. To be clear: I would love feedback and suggestions!
Thanks Max. I agree that there is a lot of ground covered here that isn’t broken up into different dimensions, and that it could have been better if it had been. I disagree that this entirely undermines the core proposition that: (a) whether we like it or not we are getting more attention; (b) it’s particularly important to think carefully about our “shop fronts” with that increased attention; and therefore (c) staying true to “EA as a question” instead of a particular set of conclusions is going to ultimately serve our goals better (this might be our biggest disagreement?).
I’d be very interested to hear you unpack why you think the opposite of “easier to get big with the ‘evidence and reasoning’ framing” is true. This seems to be a pretty important crux.
Ah, I think I was actually a bit confused what the core proposition was, because of the different dimensions.
Here’s what I think of your claims:
a) 100% agree, this is a very important consideration.
b) Agree that this is important. I think it’s also very important to make sure that our shop fronts are accurate, and that we don’t importantly distort the real work that we’re doing (I expect you agree with this?).
c) I agree with this! Or at least, that’s what I’m focused on and want more of. (And I’m also excited about people doing more cause-specific or community building to complement that/reach different audiences.)
So maybe I agree with your core thesis!
How easy is it to get big with evidence and reasoning?
I want to distinguish a few different worlds:
We just do cause-specific community building, or action-specific community building.
We do community building focused on “EA as a question” with several different causes. Our epistemics are decent but not amazing.
We do community building focused on “EA as a question” with several different causes. We are aiming for the epistemics of core members to be world class (like probably better than the average on this Forum, around the level that I see at some core EA organizations).
I’m most excited about option 3. I think that the thing we’re trying to do is really hard and it would be easy for us to cause harm if we don’t think carefully enough.
And then I think that we’re kind of just about at the level I’d like to see for 3. As we grow, I naturally expect regression to the mean, because we’re adding new people who have had less exposure to this type of thinking and may be less inclined to it. And also because I think that groups tend to reason less well as they get older and bigger. So I think that you want to be really careful about growth, and you can’t grow that quickly with this approach.
I wonder if you mean something a bit more like 2? I’m not excited about that, but I agree that we could grow it much more quickly.
I’m personally not doing 1, but I’m excited about others trying it. I think that, at least for some causes, if you’re doing 1 you can drop the epistemics/deep understanding requirements, and just have a lot of people coordinate around actions. E.g. I think that you could build a community of people who are earning to give for charities, and deferring to GiveWell and OpenPhilanthropy and GWWC about where they give. I think that this thing could grow at >200%/year. (This is the thing that I’m most excited about GWWC being.) Similarly, I think you could make a movement focused on ending global poverty based on evidence and reasoning that grows pretty quickly—e.g. around lobbying governments to spend more on aid, and spend aid money more effectively. (I think that this approach basically doesn’t work for pre-paradigmatic fields like AI safety, wild animal welfare, etc. though.)
Had a bit of time to digest overnight and wanted to clarify this a bit further.
I’m very supportive of #3, including “epistemics of core members to be world class”. But I fear that trying to achieve #3 too narrowly (demographics, worldviews, engagement levels, etc.) might ultimately undermine our goals (putting more people off, leaving the core group without as much support, narrowing our worldviews in ways that hurt our epistemics, and failing to create enough allies to get the things we want done).
I think that nurturing the experience through each level of engagement from outsider to audience through to contributor and core while remaining a “big tent” (worldview and action diverse) will ultimately serve us better than focusing too much on just developing a world class core (I think remaining a “big tent” is a necessary precondition because the world class core won’t exist without diversity of ideas/approaches and the support network needed for this core to succeed).
Happy to chat more about this.
Thanks for clarifying! Not much to add right this moment other than to say that I appreciate you going into detail about this.
Hello Max,
In turn, I strongly downvoted your post.
Luke raised, you say, some “important issues”. However, you didn’t engage with the substance of those issues. Instead, you complained that he hadn’t adequately separated them even though, for my money, they are substantially related. I wouldn’t have minded that if you’d then gone on to offer your thoughts on how EA should operate on each of the dimensions you listed, but you did not.
Given this, your comment struck me as unacceptably dismissive, particularly given you are the CEO of CEA. The message it conveys is something like “I will only listen to your concerns if you present them exactly in the format I want” which, again for my money, is not a good message to send.
I’m sorry that it came off as dismissive. I’ll edit to make clearer that I appreciate and value the datapoints and perspectives. I am keen to get feedback and suggestions in any form. I take the datapoints and perspectives that Luke shared seriously, and I’ve discussed lots of these things with him before. Sounds like you might want to share your perspective too? I’ll send you a DM.
I viewed the splitting out of different threads as a substantive contribution to the debate, but I’m sorry you didn’t see it that way. :) I agree that it would have been better if I’d given my take on all of the dimensions, but I didn’t really want to get into all of those threads right now.
Would you have this same reaction if you saw Luke and Max or GWWC/CEA as equals and peers? Maybe so! It seems like you saw this as the head of CEA talking down to the OP. Max and Luke seem to know each other though; I read Max’s comment as a quick flag between equals that there’s a disagreement here, but writing it on the forum instead of an email means the rest of us get to participate a bit more in the conversation too.
FWIW, I do think that I reacted to this a bit differently because it’s Luke (who I’ve worked with, and who I view as a peer). I think I would have been more positive/had lower standards for a random community member.
👌
Thank you for this post, I was thinking along similar lines and am grateful that you wrote this down. I would like to see growth in the number of people who make decisions around career, donations and volunteering based on the central EA question, regardless of whether they call themselves EA. More than a billion people live in high-income countries alone, and I find it conceivable that 1-10% would be open to making changes in their lives, depending on the action they can take. But for EA to accommodate 10-100 million people, I also assume we would need different shopfronts in addition to the backend capabilities (having enough charities that can handle vast amounts of donations, having pipelines for charity entrepreneurship that can help these charities grow, consulting capacity to help existing organizations switch to effectiveness metrics, etc.).
If we look at the movement from the perspective of scaling to these numbers, I expect a relatively short-term saturation in longtermist cause areas. Currently we don’t seem to be funding-constrained in that area, and I don’t see a world where millions working on these problems will be better than thousands. So from this perspective, I would like us to take the longer view and build the capacity now for a big EA movement that will be less effective on the margin, while advocating for the most effective choices now in parallel.
I initially found myself nodding along with this post, but I then realised I didn’t really understand what point you were trying to make. Here are some things I think you argue for:
theoretically, EA could be either big tent or small tent
to the extent there is a meaningful distinction, it seems better in general for EA to aim to be big tent
Now is a particularly important time to aim for EA to be big tent
Here are some things that we could do help make EA more big tent.
Am I right in thinking these are the core arguments?
A more important concern of mine with this post is that I don’t really see any evidence or arguments presented for any of these four things. I think your writing style is nice, but I’m not sure why (apart from something to do with social norms or deference) community builders should update their views in the directions you’re advocating for?
I personally hope that EA shifts a bit more in the “big tent” direction, because I think the principles of being rational and analytical about the effectiveness of charitable activity are very important, even though some of the popular charities in the EA community do not really seem effective to me. Like I disagree with the analysis while agreeing on the axioms. And as a result I am still not sure whether I would consider myself an “effective altruist” or not.
I think we can use the EA/Rationality divide to give the philosophy-oriented people in Rationality a home of their own, so that they don’t dominate EA culture. Rationality used to totally dominate EA, something that I think has become less true over time, even if it’s still pretty prevalent at current levels. Having separate rationality events that people know about, while still ensuring that people devoted to EA have strong rationalist fundamentals (which is a big concern!), seems like the way to go for creating a thriving community.
Thanks for writing this Luke! Much like others have said, there are some sections in this that really resonate with me and others I’m not so sure about. In particular I would offer a different framing on this point:
Rather than celebrating actions that have altruistic intent but questionable efficacy, I think we could be more accepting of the idea that some of these things (eg donating blood) make us feel warm fuzzy feelings, and there’s nothing wrong with wanting to feel those feelings and taking actions to achieve them, even if they might not be obviously maximally impactful. Impact is a marathon, not a sprint, and it’s important that people who are looking to have a large impact make sustainable choices, including keeping their morale high. For example, for people working on causes like AI safety where it’s difficult to see tangible impact: if donating blood gives you the boost you need to keep feeling good about yourself and what you are doing with your life, and therefore prevents you from becoming disillusioned with your choices and contributing less to AI safety, then I think that makes it very worth doing. However, I think that is more an act of self-care than something that ought to be celebrated in the community (although perhaps acts of self-care ought to be more celebrated in the community).
I also think that a lot of average day-to-day charity (and perhaps other kinds of altruism) is primarily motivated by guilt, which I don’t think is particularly helpful for donors, and I’d be surprised if it proved to be sustainable for charities either. I think effective altruism does a great job of reframing this: when I donate to GiveWell’s Maximum Impact Fund, instead of doing it to assuage a sense of guilt, I do it because it lets me feel good about myself, knowing that I am actually making a tangible difference in the world with my actions. These are the same warm fuzzy feelings as before, and I think perhaps that’s the framing I would prefer here: humans are warm-fuzzy-feeling-optimisers, and EA could do a better job at empowering people to feel those feelings when they make maximally impactful choices, rather than just ones where their impact is immediately obvious or provides some social kudos.
“I think we could be more accepting of the idea that some of these things (eg donating blood) make us feel warm fuzzy feelings, and there’s nothing wrong with wanting to feel those feelings and taking actions to achieve them, even if they might not be obviously maximally impactful. Impact is a marathon, not a sprint, and it’s important that people who are looking to have a large impact make sustainable choices, including keeping their morale high.”
Strongly agreed.
I think you may be underestimating the value of giving blood. It seems like according to the analysis here:
https://forum.effectivealtruism.org/posts/jqCCM3NvrtCYK3uaB/blood-donation-generally-not-that-effective-on-the-margin
A blood donation is still worth about 1⁄200 of a QALY. That’s still altruistic; it isn’t just warm fuzzies. If someone does not believe the EA community’s analyses of the top charities, we should still encourage them to do things like give blood.
Most of the value of giving blood is in fuzzies. You can buy a QALY from AMF for around $100, so that’s $0.50, less than 0.1x US minimum wage if blood donation takes an hour.
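To spell out the arithmetic behind that (a rough back-of-the-envelope sketch, taking the ~$100-per-QALY AMF figure and the 1/200-QALY-per-donation estimate from the comments above, plus the $7.25/hour US federal minimum wage):
$$\tfrac{1}{200}\ \mathrm{QALY}\times \$100/\mathrm{QALY} = \$0.50,\qquad \$0.50 \div (\$7.25/\mathrm{hr}) \approx 0.07\ \mathrm{hr}$$
i.e. roughly four minutes of minimum-wage work for an hour spent donating, which is where the “less than 0.1x” comparison comes from.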
If someone doesn’t believe the valuation of a QALY, it still feels wrong to encourage them to give blood for non-fuzzy reasons. I would encourage them to maximize their utility function, and I don’t know what action does that without more context—it might be thinking more about EA, donating to wildlife conservation, or doing any number of things with an altruistic theme.
Thanks for pointing that out, I didn’t realise how effective blood donation was. I think my original point still stands, though, if “donating blood” is substituted with a different proxy for something that is sub-maximally effective but feels good.
Also, almost everything anyone does is sub-maximally effective. We simply do not know what maximally effective is. We do think it’s worth trying to figure out our best guesses using the best tools available but we can never know with 100% certainty.
Yeah, I actually called this point out in general in my #8 footnote (“Plus some of these things could (low confidence) make a decent case for considering how low cost they might be.”). I’ve been at EA events or in social contexts with EAs when someone has asserted with great confidence that things like voting and giving blood are pointless. This hasn’t been well received by onlookers (for good reason IMHO) and I think it does more harm than good.
Thanks for this post! Just pointing out that the links in footnotes 3 and 4 all seem not to be working.
Edit: They were working, just had to do a captcha
They currently work for me.
Thanks for the post. I agree with most of it.
I think, on the one hand, the impact of someone participating by donations only may still be huge, as we all know what direct impact GiveWell charities can have for relatively small amounts of money. Human lives saved are not to be taken lightly.
On the other hand, I think it’s important to deemphasize donations as a basis for the movement. If we seek to cause greater impact through non-marginal change, relying on philanthropy can only be a first step.
Lastly, I don’t think Elon Musk is someone we should associate ourselves with, since about yesterday.