Correct me if I’m wrong in my interpretation here, but it seems like you are modelling impact on a unidimensional scale, as though there is always an objective answer that we know with certainty when asked ‘is X or Y more impactful’?
I got this impression from what I understood your main point to be, something like:
There is a tail of talented people who will make the most impact, and any diversion of resource towards less talented people will be lower expected value.
I think there are several assumptions in both of these points that I want to unpack (and disagree with).
On the question of whether there is a unidimensional scale of talented people who will make the most impact: I believe that the EA movement could be wrong about the problems it thinks are most important, and/or about the approaches to solving them. In the world where we are wrong, if we deter many groups with important skillsets or approaches that we didn’t realise were important because we were overconfident in some problems/solutions, then that’s quite bad. Conversely, in the world where we are right, yes, maybe we will have invested in more places than turned out to be necessary, but the downside risks seem smaller overall (depending on constraints, which I’ll get to in the next paragraph). You could argue that talent correlates across all skillsets and approaches, and maybe there’s some truth to that, but I think there are lots of places where the tails come apart, and I worry that not taking worldview diversification seriously can lead to many failure modes for a movement like EA. If you are quite certain that the EA top cause areas as listed on 80k are right about which problems are ‘most’ important and the ‘best’ approaches to solving them (this second one I am extremely uncertain about), you may reasonably disagree with me here. Is that the case? In my view, these superlatives and this collapsing of dimensions require a lot of certainty about some baseline assumptions.
On the question of whether resource diversion from talented people to less ‘talented’ people is lower expected value: I think this depends on a lot of things (sidestepping the question of how talent is defined, which the paragraph above addresses). Firstly, are the resources substitutable? In the example you gave with university groups, I’d say no: if you fund a non-top university group, you are not detracting from top university group funding (assuming no shortage of monetary funding, which I believe we can assume). However, if you meant that the resource is the time of a grantmaker specialised in community building, and it is harder for them to evaluate a non-top uni than a top one because they know fewer people there etc., then I’d say that resource is substitutable. The question of substitutability matters for identifying whether there is a real cost, but it also opens a question of resource constraints and causality. Imagine a world where that time-constrained grantmaker decides not to take the easy option but to bear the short-term cost and invest in getting to know the new non-top uni: it is possible that the ROI is higher, because returns to early-stage scaling are higher and because of the new value of information. We could also imagine a different causality: if grantmaking itself were less centralised (which a bigger tent might lead to), some grantmakers might cater to non-top unis and others to top unis, and we’d be able to see outcomes from both. So overall I think this point of yours is far from clearly true, and a bigger tent would give more value of information.
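As a rough numerical sketch of the early-stage-scaling point (the square-root form is purely an illustrative assumption, not a claim about real grant returns): suppose a group’s impact grows with diminishing returns in grantmaker time t, say

f(t) = \sqrt{t}, \qquad f'(t) = \frac{1}{2\sqrt{t}}.

Then the marginal return on the next hour of attention is f'(100) = 0.05 at a group that already has 100 hours invested, versus f'(4) = 0.25 at a new group with only 4 hours, so bearing the short-term cost of getting to know the new group can buy a higher marginal return, before even counting the value of information.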
There were some points you made that I do agree with you on. In particular: celebration disproportionate to the impact feeling fake, adding false uncertainty to avoid coming across as dogmatic (although I think there is a middle way here), and real trade-offs between axes of desirable communication qualities. Another thing I noticed and like is a care for epistemic quality and rigour, and a desire to protect that aspect of the culture. It’s not obvious to me why that would need to be sacrificed to have a bigger tent, but maybe we have different ideas of what a bigger tent looks like.
(Also, I did a quick reversal test in my head of the actions in the OP, as mentioned in the applause lights post you linked to, and in my opinion the vast majority do not qualify as applause lights, in that I’d bet you’d find the opposite point of view being genuinely argued for somewhere on this forum or LW.)
(I also felt that the applause lights argument largely didn’t hold up and came across as unnecessarily dismissive; I think the comment would have held up better without it.)
Thanks, I made an edit to weaken the wording.
I mostly wanted to point out a few characteristics of applause lights that I thought matched:
the proposed actions are easier to cheer for on a superficial level
arguing for the opposite is difficult, even if it might be correct: “Avoid coming across as dogmatic, elitist, or out-of-touch.” inverts to “be okay with coming across as dogmatic, elitist, or out-of-touch”
when you try to put them into practice, the easy changes you can make don’t address fundamental difficulties, and making sweeping changes has high cost
Looking over it again, saying they are applause lights is saying that the recommendations are entirely vacuous, which is a pretty serious claim I didn’t mean to make.
Thanks Thomas! I definitely agree that when you get into the details of some of these they’re certainly not easy and that the framing of some of them could be seen as applause lights.
I think this is unhelpfully conflating at least three pretty different concepts.
Whether impact can be collapsed to a single dimension when doing moral calculus.
Whether morality is objective.
Whether we have the predictive prowess to know with certainty ahead of time which actions are more impactful.
Yeah, maybe. Sorry if you found it unhelpful; I could have been clearer. I find your decomposition interesting. I was most strongly gesturing at the third.
I guess my personal read here is that I don’t think Thomas implied that we had perfect predictive prowess, nor did his argument rely upon this assumption.
Yeah, I just couldn’t understand his comment until I realised that he’d misunderstood the OP as saying it should be a big movement, rather than a movement with diverse views that doesn’t deter great people for having different views. So I was looking for an explanation, and that’s what my brain came up with.
Thank you, that makes sense!
First off, note that my comment was based on a misunderstanding of “big tent” as “big movement”, not “broad spectrum of views”.
As Linch pointed out, there are three different questions here (and there’s a 4th important one):
Whether impact can be collapsed to a single dimension when doing moral calculus.
Whether morality is objective.
Whether we have the predictive prowess to know with certainty ahead of time which actions are more impactful.
Whether we can identify groups of people to invest in, given the uncertainty we have.
Under my moral views, (1) is basically true. On (2), I don’t think morality is objective. (3) is clearly false. But the important point is that (3) is not necessary to put actions on a unidimensional scale, because we should be maximizing our expected utility with respect to our current best guess. This is consistent with worldview diversification, because worldview diversification can be justified by unidimensional consequentialism in two ways: maximizing EV under high uncertainty and diminishing returns, and acausal trade / veil of ignorance arguments. Of course, we should be calibrated about how much confidence to place in our current best guess of cause areas and approaches.
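To spell out the diminishing-returns route to diversification (a minimal sketch; the fixed budget and the concave within-cause returns are illustrative assumptions, not anything stated above): with credences p_i over worldviews/causes and concave returns u_i within each cause, maximizing a single expected-value objective

\max_{x_1,\dots,x_n \ge 0} \; \sum_i p_i \, u_i(x_i) \quad \text{subject to} \quad \sum_i x_i = B

gives the first-order condition p_i \, u_i'(x_i) = \lambda for every cause that receives funding, so the optimum spreads resources across several plausible causes rather than putting everything on the single best guess, even though every action is scored on one scale.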
I would state my main point as something like: "Many of the points in the OP are easy to cheer for, but do not contain the necessary arguments for why they're good, given that they have large costs." I do believe that there's a tail of talented and dedicated people who will make much more impact than others, but I don't think the second half follows; any reallocation of resources just requires weighing costs and benefits.
Here are some things I think we agree on:
Money has low opportunity cost, so funding community-building at a sufficiently EA-aligned synagogue seems great if we can find one.
Before deciding that top community-builders should work at a synagogue, we should make sure it’s the highest EV thing they could be doing (taking into account uncertainty and VOI). Note there are other high-VOI things to do, like trying to go viral on TikTok or starting EA groups at top universities in India and Brazil.
We can identify certain groups of people who will pretty robustly have higher expected impact (again where “expected” takes into account our uncertainty over which paths are best): people with higher engagement (able to make career changes) and higher intelligence and conscientiousness.
Putting some resources towards less talented/committed people is good given some combination of uncertainty and neglectedness/VOI, and it’s unclear where to put the marginal resource.
It is plausible to me that there are some low opportunity cost actions that might make it much more likely that certain people, who otherwise wouldn’t engage with effective altruism, will work on guesses that are plausible candidates for our top (or close to the top) guesses over the next 50 years.[1]
For example, how existing community organizers manage certain conversations can make a really big difference to some people’s lasting impressions of effective altruism.
Consider a person who comes to a group, is sceptical of the top causes we propose, but uses the ITN framework to make a case for another cause that they believe is more promising by EA lights.
There are many ways to respond to this person. One is to make it clear that you think that this person just hasn’t thought about it enough, or they would just come to the same conclusion as existing people in the effective altruism community. Another is to give false encouragement, overstating the extent of your agreement for the sake of making this person, who you disagree with, feel welcome. A skilled community builder with the right mindset can, perhaps, navigate between the above two reactions. They might use this as an opportunity to really reinforce the EA mindset/thinking tools that this person is demonstrating (which is awesome!) and then give some pushback where pushback is due.[2]
There are also some higher opportunity cost actions to achieve this inclusivity, including the ones you discussed (but this doesn’t seem to be what Luke was advocating for; see his reply[3]).
If done successfully, this seems to get the benefit not only of their work, but also of having another person who might be able to communicate the core ideas of effective altruism with high fidelity to the many people they meet over their entire career, within a sphere of people we might not otherwise reach.
Ideally, pushback is just on one part at a time. The shotgun method rarely leads to a constructive conversation and it’s hard to resolve all cruxes in a single conversation. The goal might be just to find one to resolve for now (maybe even a smaller one than the one that the conversation started out with) and hopefully they’ll enjoy the conversation enough to come back to another event to resolve a second.
I think it’s also worth acknowledging 1) that we have spent a decade steelmanning our views, and that it sometimes takes a lot of time to build up new ideas (see butterfly ideas), which won’t get built up if no one makes that investment, but also 2) that people have spent 10 years thinking hard about the “how to help others as much as possible” question, so it is definitely worth some investment to understand why they think there is a case for the existing causes.
Maybe this whole comment should be a reply to Luke’s reply, but moving it is a tad annoying, so hopefully it is forgivable to leave it here 🌞.
Thanks Sophia! That example is very much the kind of thing I’m talking about. IMHO it’s pretty low cost and high value for us to try and communicate in this way (and would attract more people with a scout mindset which I think would be very good).
🌞
Your comment now makes more sense given that you misunderstood the OP. Consider adding an edit at the top of your comment mentioning what your misunderstanding was; I think it’d help with interpreting it.
So you agree (3) is clearly false. I thought that you thought it was near enough to true not to worry about the possibility of being very wrong on a number of things. Good to have cleared that up.
I imagine, then, that our central disagreement lies more in what it looks like once you collapse all that uncertainty onto your unidimensional EV scale. Maybe you think it looks less diverse (on many dimensions) overall than I do. That’s my best guess at our disagreement: that we just have different priors on how much diversity is the right amount for maximising impact overall. Or maybe we have no core disagreement. As an aside, I tend to find it mostly not useful as an exercise to do that collapsing at such an aggregate level, but maybe I just don’t do enough macro analysis, or I’m just not that maximising.
BTW, on the areas where you think we agree: I strongly disagree with commitment to EA as a sign of how likely someone is to make an impact. It probably does better than the base rate in the global population, sure, but here we are discussing the marginal set of people who would or wouldn’t be deterred from using EA as one of the inputs helping them make an impact, depending on whether you take a big tent approach. I’m personally quite careful not to confuse ‘EA’ with ‘having impact’ (not saying you did this, I’m just pretty wary about it and thus sensitive), and I do worry about people selecting for ‘EA alignment’: it really turns me off EA because it’s a strong sign of groupthink and bad epistemic culture.