There’s value in giving the average person a broadly positive impression of EA, and I agree with some of the suggested actions. However, I think some of them risk being applause lights—it’s easy to say we need to be less elitist, etc., but I think the easy changes you can make sometimes don’t address fundamental difficulties, and making sweeping changes has hidden costs when you think about what they actually mean.
This is separate from any concern about whether it’s better for EA to be a large or small movement.
Be extra vigilant to ensure that effective altruism remains a “big tent”.
Edit: big tent actually means “encompassing a broad spectrum of views”, not “big movement”. I now think this section has some relevance to the OP but does not centrally address the above point.
As I understand it, this means spending more resources on people who are “less elite” and less committed to maximizing their impact. Some of these people will go on to make career changes and have lots of impact, but it seems clear that their average impact will be lower. Right now, EA has limited community-building capacity, so the opportunity cost is huge. If we allocate more resources to “big tent” efforts, it would mean less field-building at top-20 universities (Cambridge AGISF), less highly scalable top-of-funnel outreach (80,000 Hours), and fewer workshops for people who are committed to career changes and get huge speedups from them.
One could still make a neglectedness case for big-tent efforts, but the cost-benefit calculation definitely can’t be summed up in one line.
Celebrate all the good actions[6] that people are taking (not diminish people when they don’t go from 0 to 100 in under 10 seconds flat).
I’m uncomfortable doing too much celebrating of actions that are much lower impact than other actions (e.g. donating blood), from both an honesty/transparency perspective and a consequentialist perspective. From a consequentialist perspective, we should probably celebrate actions that create a lot of expected impact in order to encourage people to take those actions. So the relevant question is whether donating blood brings one closer to having a very high-impact career. I think the answer is often no: it doesn’t involve practicing careful scope-sensitive thinking, nor does it bring high-impact actions into one’s action space.
From a transparency perspective, celebration disproportionate to the good done also feels kind of fake. In the extreme, we’re basically distorting our impressions of people’s actions to get people to join a movement. I’m not saying we should shun people for taking a suboptimal action, but we should be transparent about the fact that (a) some altruistic actions aren’t very good and don’t deserve celebration, and (b) some actions are good but only because they’re on the path to an impactful career.
Communicate our ideas both in high fidelity while remaining brief and to the point (be careful of the memes we spread).
Communication is hard. There’s a tradeoff between fidelity, brevity, scale, and speed (where speed is the inverse of the time spent writing/editing/talking to distill one idea):
Long one-on-ones get very high fidelity, low brevity, low scale, and high speed
80k podcasts are high fidelity, low brevity, high scale, and low speed
A tabling pitch is low fidelity, high brevity, moderate scale, and moderate speed
A short, polished EA forum post is moderate fidelity, high brevity, high scale, and very low speed. If you’re not a gifted writer it takes multiple editing cycles to create a really high-quality post. Usually this includes copy-editing, sending the Google Doc draft to friends, having discussions in the comments, maybe adding visuals.
If we max out fidelity and brevity, we have to accept lower scale and/or speed. I think this is okay if we’re doing targeted communication, but it doesn’t play well with the big-tent approach, where we also need high scale. One could say we should just get closer to the Pareto frontier, but I think everyone is already trying to do this.
Avoid coming across as dogmatic, elitist, or out-of-touch.
I don’t strongly disagree with this—it’s bad to put off people unnecessarily—but I think it can easily be taken too far.
I’m worried that people will avoid looking dogmatic by adding unwarranted uncertainty about what actions are best, and in particular by being unwilling to reject popular ideas. I think the best remedy to looking dogmatic is actually having good, legible epistemics, not avoiding coming across as dogmatic by adding false uncertainty. (This is related to the post “‘PR’ is corrosive; ‘reputation’ is not”.) When someone asks whether volunteering at an animal shelter is high-impact, we should give well-reasoned arguments that there are probably higher-value things to do under almost every scope-sensitive moral view (perhaps starting from first principles if they’re new), not avoid looking dogmatic by telling them something largely false like “Some people might find higher impact at an animal shelter because they have comparative advantage / are much more motivated, and there could also be unknown unknowns that place really high value on the work at animal shelters”. It’s impossible to spend 1% of our resources on every idea with as much true merit as volunteering at animal shelters because there are more than 100 such ideas, so the only reason we would is a bias towards popular things. But when we require a well-reasoned case using the ITN framework to allocate 1% of our effort to a problem, and therefore refuse to spend 1% of our effort on animal shelters, plastic bag bans, or the NYC homelessness problem, we will come off as dogmatic to some people. The OP addresses the need to protect our epistemics at the end, but I don’t think it stresses this enough.
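As a rough illustration of the kind of ITN-style comparison gestured at above, here is a minimal sketch; the causes and scores are entirely made up for illustration and are not taken from the discussion.

```python
# Toy ITN-style prioritisation. All causes and scores below are hypothetical,
# chosen only to illustrate why effort concentrates on a few top problems
# rather than being spread 1% at a time over every idea with some merit.
causes = {
    "hypothetical cause A": (9, 4, 8),   # (importance, tractability, neglectedness), 1-10
    "hypothetical cause B": (6, 7, 3),
    "hypothetical local volunteering option": (2, 8, 1),
}

def itn_score(importance, tractability, neglectedness):
    # Multiplicative, treating each factor as a rough multiplier on the
    # marginal good done per unit of effort.
    return importance * tractability * neglectedness

for name, factors in sorted(causes.items(), key=lambda kv: itn_score(*kv[1]), reverse=True):
    print(f"{name}: {itn_score(*factors)}")
```

On these made-up numbers the top cause scores an order of magnitude higher than the local option, which is the kind of gap that makes a well-reasoned case (rather than popularity) the bar for allocating even 1% of effort.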
There are also many crucial EA things that either sound elitist or are elitist:
More resources are focused on top universities than community colleges (because talent is concentrated there and this ultimately helps the most sentient beings).
Over 80% of EA funding is from billionaires.
People are flown across the world to retreats (because this is often the most efficient way to network or learn, and we think their time can do more good than spending the money on anything else).
We are looking for people who produce 1000x the impact of others (because they have more multipliers available).
We shouldn’t be exclusionary for no reason when talking to new people. But based on my experience community-building at two universities and at ~10 retreats/EAGs, much of the reason EA looks elitist is not that we’re exclusionary for no reason; it’s that EAs do important things that look elitist.
Maybe the most elitist-sounding practices should even be slightly reduced for PR reasons. But going further to reduce the appearance of elitism would hamstring EA by taking away some of the most valuable direct and meta interventions.
Celebrate all the good actions[6] that people are taking (not diminish people when they don’t go from 0 to 100 in under 10 seconds flat).
--
I’m uncomfortable doing too much celebrating of actions that are much lower impact than other actions
I think the following things can both be true:
The best actions are much higher impact than others and should be heavily encouraged.
Most people will come in on easier but lower-impact actions, and if there isn’t an obvious, stepped progression to higher-impact actions, with support to facilitate it, then many will fall out unnecessarily. Or they may be put off entirely if ‘entry level’ actions either aren’t available or receive very low reward or status.
I didn’t read the OP as saying that we should settle with lower impact actions if there’s the potential for higher impact ones. I read it as saying that we should make it easier for people to find their level—either helping them to reach higher impact over time if for whatever reason they’re unable or unwilling to get there straight away, or making space for lower impact actions if for whatever reason that’s what’s available.
Some of this will involve shouting out and rewarding less impactful actions beyond their absolute value, not for its own sake but because this may be the best way of helping this progression. I’ve definitely noticed the ‘0-100’ thing, and if I were younger and less experienced it might have bothered me more.
Thanks for your response. I tend to actually agree with a lot (but not all) of these points, so I totally own that some of this just needs clarification that wouldn’t be the case if I were clearer in my original post.
this means spending more resources on people who are “less elite” and less committed to EA
There’s a difference between actively recruiting from “less elite” sources and being careful about your shopfronts so that they don’t put off would-be effective altruists and create enemies of could-be allies. I’m pointing much more to the latter than the former (though I do think there’s value in the former too).
I’m not saying we should shun people for taking a suboptimal action, but we should be transparent about the fact that (a) some altruistic actions aren’t very good and don’t deserve celebration, and (b) some actions are good but only because they’re on the path to an impactful career.
I’m mostly saying we shouldn’t shun people for taking a suboptimal action. But also, we should be careful about how confident we are about what is suboptimal or not, and we should use positive reinforcement of good actions instead of guilting people for not reaching a particular standard. And we should recognise that we’re all on a journey and the destination isn’t always that clear anyway (Rob Wiblin thought it might not be a good idea for SBF to earn to give, and I think that encouraging him to become a grantmaker at Open Philanthropy probably would have been a worse outcome).
Side note: There’s something pretty off-putting about treating the actions of altruistic people as purely a means to getting them into a particular predestined career. I think we lose good people when we treat them this way. We can seem like slimy salespeople.
Communication is hard. There’s a tradeoff between fidelity, brevity, scale, and speed
Again, this is where you have different focuses in different places. Our shopfronts (e.g. effectivealtruism.org, fellowships, virtual programs, introductory presentations, personal interactions with community members and group leaders, etc.) start brief and concise, with a clear path to dig deeper.
big-tent approach where we also need high scale
I think this is a central confusion with my post and I own I must not have communicated this well: big tent doesn’t mean actively increasing reach. Big tent means encouraging and showcasing the diversity that exists within the community so that people can see that we’re committed to the question of “how can we do the most good” not a specific set of answers.
someone asks whether volunteering in an animal shelter is “EA”, we should give well-reasoned arguments that there are probably higher-value things to do under almost every scope-sensitive moral view (perhaps starting from first principles if they’re new), not avoid looking dogmatic by telling them something largely false like “Some people might find higher impact at an animal shelter because they have comparative advantage / are much more motivated, and there could also be unknown unknowns that place really high value on the work at animal shelters”.
I agree! The former is a great response, the latter is not. I’d also say something along the lines of “you can have multiple goals and that’s fine” and that if warm fuzzies are important and motivating for you then that’s great. I wouldn’t encourage someone to say it’s “EA” if it isn’t.
We should probably not come off as exclusionary when talking to new people.
Great! That’s one of my main points.
Taken to the extreme, avoiding the appearance of elitism would hamstring EA by taking away some of the most valuable direct and meta interventions.
I agree! I think we should just be judicious about it and bear in mind both (a) how the perception of elitism can hurt us; and (b) how we miss out on great people because of unnecessary elitism, which results in us achieving a lot less.
big tent doesn’t mean actively increasing reach. Big tent means encouraging and showcasing the diversity that exists within the community so that people can see that we’re committed to the question of “how can we do the most good” not a specific set of answers.
Correct me if I’m wrong in my interpretation here, but it seems like you are modelling impact on a unidimensional scale, as though there is always an objective answer that we know with certainty when asked ‘is X or Y more impactful’?
I got this impression from what I understood your main point to be, something like:
There is a tail of talented people who will make the most impact, and any diversion of resource towards less talented people will be lower expected value.
I think there are several assumptions in both of these points that I want to unpack (and disagree with).
On the question of whether there is a unidimensional scale of talented people who will make the most impact: I believe that the EA movement could be wrong about the problems it thinks are most important, and/or the approaches to solving them. In the world where we are wrong, if we deter many groups with important skillsets or approaches that we didn’t realise were important because we were overconfident in some problems/solutions, then that’s quite bad. Conversely, in the world where we are right, yes, maybe we have invested in more places than turned out to be necessary, but the downside risks seem smaller overall (depending on constraints, which I’ll get to in the next para). You could argue that talent correlates across all skillsets and approaches, and maybe there’s some truth to that, but I think there are lots of places where the tails come apart, and I worry that not taking worldview diversification seriously can lead to many failure modes for a movement like EA. If you are quite certain that the EA top cause areas as listed on 80k correctly identify the ‘most’ important problems and the ‘best’ approaches to solving them (this second one I am extremely uncertain about), you may reasonably disagree with me here—is that the case? In my view, these superlatives and this collapsing of dimensions require a lot of certainty about some baseline assumptions.
On the question of whether resource diversion from talented people to less ‘talented’ people is lower expected value: I think this depends on lots of things (sidestepping the question of how talent is defined, which the above para addresses). Firstly, are the resources substitutable? In the example you gave with university groups, I’d say no: if you fund a non-top university group, you are not detracting from top university group funding (assuming no shortage of monetary funding, which I believe we can assume). However, if you meant the resource is the time of a grantmaker specialised in community building, and it is harder for them to evaluate a non-top uni than a top one because maybe they know fewer people there etc., then I’d say that resource is substitutable. The question of substitutability matters for identifying whether it is a real cost, but it also opens a question of resource constraints and causality. Imagine a world where that time-constrained grantmaker decides not to take the easy option but to bear the short-term cost and invest in getting to know the new non-top uni—it is possible that the ROI is higher because of higher returns to early-stage scaling and new value of information. We could also imagine a different causality: if grantmaking itself were less centralised (which a bigger tent might lead to), some grantmakers might cater to non-top unis, and others to top unis, and we’d be able to see outcomes from both. So overall I think this point of yours is far from clearly true, and a bigger tent would give more value of information.
There were some points you made that I do agree with you on. In particular: celebration disproportionate to the impact feeling fake, adding false uncertainty to avoid coming across as dogmatic (although I think there is a middle way here), and real trade-offs in the axes of desirable communication qualities. Another thing I noticed that I like is a care for epistemic quality and rigour and wanting to protect that cultural aspect. It’s not obvious to me why that would need to be sacrificed to have a bigger tent—but maybe we have different ideas of what a bigger tent looks like.
(Also, I did a quick reversal test of the actions in the OP in my head, as mentioned in the applause lights post you linked to, and the vast majority do not stand up as applause lights in my opinion, in that I’d bet you’d find the opposite point of view being genuinely argued for around this forum or LW somewhere.)
(I also felt that the applause lights argument largely didn’t hold up and came across as unnecessarily dismissive, I think the comment would have held up better without it)
I mostly wanted to point out a few characteristics of applause lights that I thought matched:
the proposed actions are easier to cheer for on a superficial level
arguing for the opposite is difficult, even if it might be correct: “Avoid coming across as dogmatic, elitist, or out-of-touch.” inverts to “be okay with coming across as dogmatic, elitist, or out-of-touch”
when you try to put them into practice, the easy changes you can make don’t address fundamental difficulties, and making sweeping changes has high cost
Looking over it again, saying they are applause lights is saying that the recommendations are entirely vacuous, which is a pretty serious claim I didn’t mean to make.
Thanks Thomas! I definitely agree that when you get into the details of some of these they’re certainly not easy and that the framing of some of them could be seen as applause lights.
Correct me if I’m wrong in my interpretation here, but it seems like you are modelling impact on a unidimensional scale, as though there is always an objective answer that we know with certainty when asked ‘is X or Y more impactful’?
I think this is unhelpfully conflating at least three pretty different concepts.
Whether impact can be collapsed to a single dimension when doing moral calculus.
Whether morality is objective
Whether we have the predictive prowess to know with certainty ahead of time which actions are more impactful
Yeah maybe. Sorry if you found it unhelpful, I could have been clearer. I find your decomposition interesting. I was most strongly gesturing at the third.
I guess my personal read here is that I don’t think Thomas implied that we had perfect predictive prowess, nor did his argument rely upon this assumption.
Yeah, I just couldn’t understand his comment until I realised that he’d misunderstood the OP as saying it should be a big movement rather than a movement with diverse views that doesn’t deter great people for having different views. So I was looking for an explanation and that’s what my brain came up with.
First off, note that my comment was based on a misunderstanding of “big tent” as “big movement”, not “broad spectrum of views”.
Correct me if I’m wrong in my interpretation here, but it seems like you are modelling impact on a unidimensional scale, as though there is always an objective answer that we know with certainty when asked ‘is X or Y more impactful’?
As Linch pointed out, there are three different questions here (and there’s a 4th important one):
Whether impact can be collapsed to a single dimension when doing moral calculus.
Whether morality is objective
Whether we have the predictive prowess to know with certainty ahead of time which actions are more impactful
Whether we can identify groups of people to invest in, given the uncertainty we have
Under my moral views, (1) is basically true. I think morality is not (2) objective. (3) is clearly false. But the important point is that (3) is not necessary to put actions on a unidimensional scale, because we should be maximizing our expected utility with respect to our current best guess. This is consistent with worldview diversification, because it can be justified by unidimensional consequentialism in two ways: maximizing EV under high uncertainty and diminishing returns, and acausal trade / veil of ignorance arguments. Of course, we should be calibrated as to the confidence we have in the best guess of our current cause areas and approaches.
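To make the expected-value claim above a bit more concrete, here is a minimal sketch with made-up credences and values (none of the numbers come from this thread): a maximiser working on a single EV scale, once you add uncertainty over worldviews and diminishing returns, still ends up allocating something to each worldview’s favoured cause rather than betting everything on one.

```python
import numpy as np

# Hypothetical credences in three worldviews and the value each assigns to a
# unit of resources spent on its favoured cause (illustrative numbers only).
credences = np.array([0.5, 0.3, 0.2])
unit_value = np.array([10.0, 6.0, 4.0])

def expected_utility(allocation):
    # Diminishing returns: value grows like sqrt(resources spent on a cause).
    return float(np.sum(credences * unit_value * np.sqrt(allocation)))

# Brute-force search over ways to split a unit budget across the three causes.
best_alloc, best_eu = None, float("-inf")
grid = np.arange(0.0, 1.0001, 0.01)
for a in grid:
    for b in grid:
        c = 1.0 - a - b
        if c < -1e-9:
            continue
        eu = expected_utility(np.array([a, b, max(c, 0.0)]))
        if eu > best_eu:
            best_alloc, best_eu = (round(a, 2), round(b, 2), round(max(c, 0.0), 2)), eu

# The single-scale optimum still funds all three causes (roughly 0.87 / 0.11 / 0.02
# here), i.e. some worldview diversification falls out of plain EV maximisation.
print(best_alloc, round(best_eu, 3))
```

This only illustrates the “uncertainty plus diminishing returns” justification mentioned above; the acausal trade / veil of ignorance argument isn’t captured by it.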
There is a tail of talented people who will make the most impact, and any diversion of resource towards less talented people will be lower expected value.
I would state my main point as something like “Many of the points in the OP are easy to cheer for, but do not contain the necessary arguments for why they’re good, given that they have large costs”. I do believe that there’s a tail of talented+dedicated people who will make much more impact than others, but I don’t think the second half follows; I just think that any reallocation of resources requires weighing costs and benefits.
Here are some things I think we agree on:
Money has low opportunity cost, so funding community-building at a sufficiently EA-aligned synagogue seems great if we can find one.
Before deciding that top community-builders should work at a synagogue, we should make sure it’s the highest EV thing they could be doing (taking into account uncertainty and VOI). Note there are other high-VOI things to do, like trying to go viral on TikTok or starting EA groups at top universities in India and Brazil.
We can identify certain groups of people who will pretty robustly have higher expected impact (again where “expected” takes into account our uncertainty over what paths are best): people with higher engagement (able to make career changes) and higher intelligence and conscientiousness.
Putting some resources towards less talented/committed people is good given some combination of uncertainty and neglectedness/VOI, and it’s unclear where to put the marginal resource.
It is plausible to me that there are some low opportunity cost actions that might make it way more likely that certain people, who otherwise wouldn’t engage with effective altruism, will work on guesses that are plausible candidates for our top (or close to the top) guesses in the next 50 years.[1]
For example, how existing community organizers manage certain conversations can make a really big difference to some people’s lasting impressions of effective altruism.
Consider a person who comes to a group, is sceptical of the top causes we propose, but uses the ITN framework to make a case for another cause that they believe is more promising by EA lights.
There are many ways to respond to this person. One is to make it clear that you think that this person just hasn’t thought about it enough, or they would just come to the same conclusion as existing people in the effective altruism community. Another is to give false encouragement, overstating the extent of your agreement for the sake of making this person, who you disagree with, feel welcome. A skilled community builder with the right mindset can, perhaps, navigate between the above two reactions. They might use this as an opportunity to really reinforce the EA mindset/thinking tools that this person is demonstrating (which is awesome!) and then give some pushback where pushback is due.[2]
There are also some higher opportunity cost actions to achieve this inclusivity, including the ones you discussed (but this doesn’t seem to be what Luke was advocating for; see his reply[3]).
This seems to get the benefit, if done successfully, not only of their work but of having another person who might be able to communicate the core idea of effective altruism with high fidelity to many others they meet over their entire career, within a sphere of people we might not otherwise reach.
Ideally, pushback is just on one part at a time. The shotgun method rarely leads to a constructive conversation and it’s hard to resolve all cruxes in a single conversation. The goal might be just to find one to resolve for now (maybe even a smaller one than the one that the conversation started out with) and hopefully they’ll enjoy the conversation enough to come back to another event to resolve a second.
I think it’s also worth acknowledging 1) that we have spent a decade steelmanning our views, and that sometimes it takes a lot of time to build up new ideas (see butterfly ideas), which won’t get built up if no one makes that investment, but also 2) that people have spent 10 years thinking hard about the “how to help others as much as possible” question, so it is definitely worth some investment to get an understanding of why these people think there is a case for these existing causes.
Thanks Sophia! That example is very much the kind of thing I’m talking about. IMHO it’s pretty low cost and high value for us to try and communicate in this way (and would attract more people with a scout mindset which I think would be very good).
Your comment now makes more sense given that you misunderstood the OP. Consider adding an edit at the top of your comment mentioning what your misunderstanding was; I think it’d help with interpreting it.
So you agree 3 is clearly false. I thought that you thought it was near enough true to not worry about the possibility of being very wrong on a number of things. Good to have cleared that up.
I imagine then that our central disagreement lies more in what it looks like once you collapse all that uncertainty onto your unidimensional EV scale. Maybe you think it looks less diverse (on many dimensions) overall than I do. That’s my best guess at our disagreement—that we just have different priors on how much diversity is the right amount for maximising impact overall. Or maybe we have no core disagreement. As an aside, I tend to find it mostly not useful as an exercise to do that collapsing thing at such an aggregate level, but maybe I just don’t do enough macro analysis, or I’m just not that maximising.
BTW, on the areas where you think we agree: I strongly disagree with commitment to EA as a sign of how likely someone is to make an impact. Probably it does better than the base rate in the global population, sure, but here we are discussing the marginal set of people who would or wouldn’t get deterred from using EA as one of their inputs in helping them make an impact, depending on whether you take a big tent approach. I’m personally quite cautious not to confuse ‘EA’ with ‘having impact’ (not saying you did this, I’m just pretty wary about it and thus sensitive), and I do worry about people selecting for ‘EA alignment’ - it really turns me off EA because it’s a strong sign of groupthink and bad epistemic culture.
I think the best remedy to looking dogmatic is actually having good, legible epistemics, not avoiding coming across as dogmatic by adding false uncertainty.
This is a great sentence, I will be stealing it :)
However, I think “having good legible epistemics” being sufficient for not coming across as dogmatic is partially wishful thinking. A lot of these first impressions are just going to be pattern-matching, whether we like it or not.
I would be excited to find ways to pattern-match better, without actually sacrificing anything substantive. One thing I’ve found anecdotally is that a sort of “friendly transparency” works pretty well for this—just be up front about what you believe and why, don’t try to hide ideas that might scare people off, be open about the optics on things, ways you’re worried they might come across badly, and why those bad impressions are misleading, etc.