First off, note that my comment was based on a misunderstanding of “big tent” as “big movement”, not “broad spectrum of views”.
Correct me if I’m wrong in my interpretation here, but it seems like you are modelling impact on a unidimensional scale, as though there is always an objective answer that we know with certainty when asked ‘is X or Y more impactful’?
As Linch pointed out, there are three different questions here (and there’s a 4th important one):
1. Whether impact can be collapsed to a single dimension when doing moral calculus
2. Whether morality is objective
3. Whether we have the predictive prowess to know with certainty ahead of time which actions are more impactful
4. Whether we can identify groups of people to invest in, given the uncertainty we have
Under my moral views, (1) is basically true. I think (2) is false: morality is not objective. (3) is clearly false. But the important point is that (3) is not necessary to put actions on a unidimensional scale, because we should be maximizing our expected utility with respect to our current best guess. This is consistent with worldview diversification, because it can be justified by unidimensional consequentialism in two ways: maximizing EV under high uncertainty and diminishing returns, and acausal trade / veil of ignorance arguments. Of course, we should be calibrated as to the confidence we have in the best guess of our current cause areas and approaches.
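To make the “maximizing EV under high uncertainty and diminishing returns” justification concrete, here is a minimal sketch of the kind of calculation I mean (the distributions and the square-root returns curve are made-up assumptions purely for illustration, not anything from this thread):

```python
# A minimal sketch (assumptions invented for illustration): under a single EV scale,
# uncertainty about cost-effectiveness plus diminishing returns can make splitting
# resources across causes the expected-value-maximizing choice.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical uncertainty over each cause's "true" cost-effectiveness per unit of funding.
cause_a = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)   # current best-guess cause
cause_b = rng.lognormal(mean=-0.5, sigma=1.2, size=10_000)  # more speculative alternative

def expected_value(share_a, budget=100.0):
    """Expected total value of giving `share_a` of the budget to A and the rest to B,
    assuming square-root (i.e. diminishing) returns to funding within each cause."""
    spend_a, spend_b = share_a * budget, (1 - share_a) * budget
    return np.mean(cause_a * np.sqrt(spend_a) + cause_b * np.sqrt(spend_b))

shares = np.linspace(0.0, 1.0, 101)
best_share = max(shares, key=expected_value)
print(f"EV-maximizing share for cause A: {best_share:.2f}")  # an interior split, not 0 or 1
```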
There is a tail of talented people who will make the most impact, and any diversion of resource towards less talented people will be lower expected value.
I would state my main point as something like “Many of the points in the OP are easy to cheer for, but do not contain the necessary arguments for why they’re good, given that they have large costs”. I do believe that there’s a tail of talented+dedicated people who will make much more impact than others, but I don’t think the second half follows; rather, any reallocation of resources requires weighing costs and benefits.
Here are some things I think we agree on:
Money has low opportunity cost, so funding community-building at a sufficiently EA-aligned synagogue seems great if we can find one.
Before deciding that top community-builders should work at a synagogue, we should make sure it’s the highest EV thing they could be doing (taking into account uncertainty and VOI). Note there are other high-VOI things to do, like trying to go viral on TikTok or starting EA groups at top universities in India and Brazil.
We can identify certain groups of people who will pretty robustly have higher expected impact (again, where “expected” takes into account our uncertainty over what paths are best): people with higher engagement (e.g. those able to make career changes) and higher intelligence and conscientiousness.
Putting some resources towards less talented/committed people is good given some combination of uncertainty and neglectedness/VOI, and it’s unclear where to put the marginal resource (see the toy value-of-information sketch after this list).
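As a toy numerical illustration of the uncertainty/VOI point in the list above (every probability and payoff below is invented purely for the sake of the example, not an estimate of anything real):

```python
# A toy value-of-information calculation (all numbers invented for the example):
# is it worth learning more before moving the marginal resource to a speculative option?

# Speculative outreach experiment: succeeds with p = 0.2 for 50 units of impact, else ~0.
# The default option reliably yields 8 units.
p_success, payoff_success, payoff_failure = 0.2, 50.0, 0.0
default_payoff = 8.0

# Acting now: pick whichever option has the higher expected value.
ev_speculative = p_success * payoff_success + (1 - p_success) * payoff_failure
ev_act_now = max(ev_speculative, default_payoff)

# With perfect information we would learn the outcome first, then choose the better option.
ev_with_info = (p_success * max(payoff_success, default_payoff)
                + (1 - p_success) * max(payoff_failure, default_payoff))

value_of_information = ev_with_info - ev_act_now
print(f"EV acting now: {ev_act_now:.1f}")                    # 10.0
print(f"EV with perfect info: {ev_with_info:.1f}")           # 16.4
print(f"Value of information: {value_of_information:.1f}")   # 6.4
```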
It is plausible to me that there are some low-opportunity-cost actions that could make it much more likely that certain people, who otherwise wouldn’t engage with effective altruism, will work on guesses that are plausible candidates for our top (or close to the top) guesses over the next 50 years.[1]
For example, how existing community organizers manage certain conversations can make a really big difference to some people’s lasting impressions of effective altruism.
Consider a person who comes to a group sceptical of the top causes we propose, but who uses the ITN framework to make a case for another cause that they believe is more promising by EA lights.
There are many ways to respond to this person. One is to make it clear that you think this person just hasn’t thought about it enough, or they would have come to the same conclusions as existing people in the effective altruism community. Another is to give false encouragement, overstating the extent of your agreement for the sake of making this person, who you disagree with, feel welcome. A skilled community builder with the right mindset can, perhaps, navigate between these two reactions. They might use this as an opportunity to really reinforce the EA mindset/thinking tools that this person is demonstrating (which is awesome!) and then give some pushback where pushback is due.[2]
There are also some higher opportunity cost actions to achieve this inclusivity, including the ones you discussed (but this doesn’t seem to be what Luke was advocating for; see his reply[3]).
If done successfully, this seems to get the benefit not only of their work, but of having another person who might be able to communicate the core ideas of effective altruism with high fidelity to the many others they meet over their entire career, within a sphere of people we might not otherwise reach.
Ideally, pushback is just on one part at a time. The shotgun method rarely leads to a constructive conversation, and it’s hard to resolve all cruxes in a single discussion anyway. The goal might be just to find one crux to resolve for now (maybe even a smaller one than the one the conversation started out with), and hopefully they’ll enjoy the conversation enough to come back to another event to resolve a second.
I think it’s also worth acknowledging 1) that we have spent a decade steelmanning our views, and that it sometimes takes a lot of time to build up new ideas (see butterfly ideas), which won’t get built up if no one makes that investment; but also 2) that people have spent 10 years thinking hard about the “how to help others as much as possible” question, so it is definitely worth some investment to get an understanding of why these people think there is a case for the existing causes.
Maybe this whole comment should be a reply to Luke’s reply, but moving this comment is a tad annoying, so hopefully it is forgivable to leave it here 🌞.
Thanks Sophia! That example is very much the kind of thing I’m talking about. IMHO it’s pretty low cost and high value for us to try and communicate in this way (and would attract more people with a scout mindset which I think would be very good).
Your comment now makes more sense given that you misunderstood the OP. Consider adding an edit at the top of your comment mentioning what your misunderstanding was; I think it’d help with interpreting it.
So you agree that (3) is clearly false. I thought that you thought it was near enough true not to worry about the possibility of being very wrong on a number of things. Good to have cleared that up.
I imagine our central disagreement then lies more in what it looks like once you collapse all that uncertainty onto your unidimensional EV scale. Maybe you think it looks less diverse (on many dimensions) overall than I do. That’s my best guess at our disagreement: that we just have different priors on how much diversity is the right amount for maximising impact overall. Or maybe we have no core disagreement. As an aside, I tend to find it mostly not useful as an exercise to do that collapsing at such an aggregate level, but maybe I just don’t do enough macro analysis, or I’m just not that maximising.
BTW, on the areas where you think we agree: I strongly disagree with using commitment to EA as a sign of how likely someone is to make an impact. It probably does better than the base rate in the global population, sure, but here we are discussing the marginal set of people who would or wouldn’t be deterred from using EA as one of the inputs that help them make an impact, depending on whether you take a big tent approach. I’m personally quite careful not to confuse ‘EA’ with ‘having impact’ (not saying you did this, I’m just pretty wary about it and thus sensitive), and I do worry about people selecting for ‘EA alignment’; it really turns me off EA because it’s a strong sign of groupthink and bad epistemic culture.