Some key uncertainties for me are:
What products and clusters of ideas work as ‘stepping stones’ or ‘gateways’ toward (full-blown) EA [or similarly ‘impactful’ mindsets]?
By this I roughly mean: for various products X (e.g., a website providing charity evaluations, or a book, or …), how does the unconditional probability P(A takes highly valuable EA-ish actions within their next few years) compare to the conditional probability P(A takes highly valuable EA-ish actions within their next few years | A now encounters X)?
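A minimal way to formalize the comparison (the notation here is mine, purely to make the object of interest explicit): writing $V_A$ for the event “A takes highly valuable EA-ish actions within their next few years”, the quantity of interest for a product X is roughly

$$\Delta(X) = P(V_A \mid A \text{ now encounters } X) - P(V_A),$$

and X works as a ‘stepping stone’ to the extent that $\Delta(X)$ is large across the people it can realistically reach.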
I weakly suspect that my having different views on this than other fund managers was perhaps the largest source of significant disagreements.
It tentatively seems to me that I’m unusually optimistic about the range of products that work as stepping stones in this sense. That is, I worry less about whether a product X is extremely high-quality or accurate in all respects, or agrees with typical EA views or motivations in all respects. Instead, I’m more excited about increasing the reach of a wider range of products that meet a high but different bar: roughly, ‘taking the goal of effectively improving the world seriously by making a sincere effort to improve on median do-gooding through evidence-based reasoning, and delivering results that are impressive and epistemically useful to someone previously only exposed to median do-gooding’, or at least conveying information, or embodying a style of reasoning about the world, that is important for such endeavours.
To give a random example, I might be fairly excited about assigning Steven Pinker’s The Better Angels of Our Nature as reading in high school, even though I think some claims in this book are false and that there are important omissions.
To give a different example (and one we have discussed before), I’m fairly optimistic about the impact of journalistic pieces like this one, and would be very excited if more people were exposed to it. From an in-the-weeds research perspective, this article has a number of problems, but I basically don’t care about them in this context.
Yet another example: I’m fairly glad that we have content on the contributions of different animal products to animal suffering (e.g. this or this) even though I think that for most people changing their diet is not among the most effective things they can do to improve the world (or even help farmed animals now).
(Though in some other ways I might be more pessimistic / might have a higher bar for such content. E.g., I might care more about content being well written and engaging.)
I think my underlying inside-view model here is roughly:
One of the highest-leverage effects we can have on people is to ‘dislodge’ them from a state of complacency or fatalism about their ability to make a big positive contribution to the world.
To achieve this effect, it is often sufficient to expose people to examples of others who are seriously trying to make a big positive contribution to the world, who are guided by roughly the ‘right’ methods (e.g., a scientific mindset), and who are doing so in a way that seems impressive to the person exposed.
It is helpful if these efforts are ‘successful’ by generic lights, e.g., produce well-received output.
It doesn’t matter that much if a ‘core EA’ would think that, all things considered, these efforts are worthless or net negative because they’re in the wrong cause area or miss some crucial consideration or whatever.
How can we structure the EA community in such a way that it can ‘absorb’ very large numbers of people while also improving the allocation of talent or other resources?
I am personally quite dissatisfied with many discussions and standard arguments around “how much should EA grow?” etc. In particular, I think the way to mitigate potential negative effects of too rapid or indiscriminate growth might not be “grow more slowly” or “have a community of uniformly extremely high capability levels” but instead: “structure the community in such a way that selection/screening and self-selection push toward a good allocation of people to different groups, careers, discussions, etc.”.
ETA: Upon rereading, I worry that the above can be construed as being too indiscriminately negative about discussions on and efforts in EA community building. I think I’m mainly reporting my immediate reaction to a diffuse “vibe” I get from some conversations I remember, not to specific current efforts by people thinking and working on community building strategy full-time (I think often I simply don’t have a great understanding of these people’s views).
I find it instructive to compare the EA community to pure maths academia, and to large political parties.
Making research contributions to mature fields of pure maths is extremely hard and requires highly unusual levels of fluid intelligence compared to the general population. Academic careers in pure maths are extremely competitive (in terms of, e.g., the fraction of PhDs who’ll become tenured professors). A majority of mathematicians will never make a breakthrough research contribution, and will never teach anyone who makes a breakthrough research contribution. But in my experience mathematicians put much less emphasis on only recruiting the very best students, or on only teaching maths to people who could make large contributions, or on worrying about diluting the discipline by growing too fast, or … And while perhaps in a sense they put “too little” weight on this, I also think they don’t need to put as much weight on this because they can rely more on selection and self-selection: a large number of undergraduates start, but a significant fraction will simply realize that maths isn’t for them and drop out, and likewise at later stages; conversely, the overall system has mechanisms to identify top talent and allocate it to the top schools, etc.
Example: Srinivasa Ramanujan was, by some criteria, probably the most talented mathematician of the 20th century, if not of all time. It seems fairly clear that his short career was only possible because (1) he went to a school that taught everyone the basics of mathematics and (2) later he had access to (albeit perhaps ‘mediocre’) books on advanced mathematics: “In 1903, when he was 16, Ramanujan obtained from a friend a library copy of A Synopsis of Elementary Results in Pure and Applied Mathematics, G. S. Carr’s collection of 5,000 theorems. Ramanujan reportedly studied the contents of the book in detail. The book is generally acknowledged as a key element in awakening his genius.”
I’m not familiar with Carr, but the brevity of his Wikipedia article suggests that, while he taught at Cambridge, probably the only reason we remember Carr today is that he happened to write a book which happened to be available in some library in India.
Would someone like Carr have existed, and would he have written his Synopsis, if academic mathematics had had an EA-style culture of fixating on the small fraction of top contributors, while neglecting to build a system that can absorb people with Carr-level talent and that can consequently cast a ‘wide net’, exposing very large numbers of people to mathematics and to an opportunity to ‘rise through its ranks’?
Similarly, only a very small number of people have even a shot at, say, becoming the next US president. But it would probably still be a mistake if all local branches of the Democratic and Republican parties adopted an ‘elitist’ approach to recruitment and obsessed about only recruiting people with unusually good ex-ante chances of becoming the next president.
So it seems that even though these other ‘communities’ also face, along some metrics, very heavy-tailed ex-post impacts, they adopt a fairly different approach to growth, how large they should be, etc., and are generally less uniformly and less overtly “elitist”. Why is that? Maybe there are differences between these communities that mean their approaches can’t work for EA.
E.g., perhaps maths relies crucially on there being a consensus on what the important research questions are, plus it being easy to verify what counts as a solution, as well as a generally better ability to identify talent and good work that is predictive of later potential. Maybe EA is just too ‘preparadigmatic’ to allow for something like that.
Perhaps the key differences for political parties are a higher demand for ‘non-elite’ talent (e.g., people doing politics at the local level) and the general structural feature that, in democracies, there are incentives to popularize one’s views to large fractions of the general population.
But is that it? I’m worried that we gave up too early, and that if we tried harder we’d find a way to create structures that can both accommodate higher growth and improve the allocation of talent within the community (which doesn’t seem great anyway), despite these structural challenges.
How large are the returns on expected lifetime impact as we move someone from “hasn’t heard of EA at all” toward “is maximally dedicated and believes all kinds of highly specific EA claims including about, e.g., top cause areas or career priority paths”?
E.g., very crudely, suppose I can either cause N people to move from ‘1% EA’ to ‘10% EA’ or 1 person from ‘50% EA’ to ‘80% EA’. For which value of N should I be indifferent?
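To make the arithmetic behind this question concrete, here is a toy calculation (a sketch under assumptions entirely of my own choosing; the impact functions below are hypothetical illustrations, not claims about actual returns):

```python
# Toy model: suppose expected lifetime impact is some function f of
# "EA-ness" x in [0, 1]. (Both f and the single-dimension framing are
# hypothetical simplifications.) We are indifferent between moving N
# people from 1% to 10% and one person from 50% to 80% exactly when
#   N * (f(0.10) - f(0.01)) == f(0.80) - f(0.50).

def indifference_n(f):
    """N at which the two interventions produce equal total gains."""
    return (f(0.80) - f(0.50)) / (f(0.10) - f(0.01))

print(indifference_n(lambda x: x))         # linear returns:  N ~ 3.3
print(indifference_n(lambda x: x ** 2))    # convex returns:  N ~ 39.4
print(indifference_n(lambda x: x ** 0.5))  # concave returns: N ~ 0.9
```

The spread between these answers is the point: the indifference value of N depends heavily on whether returns to additional ‘EA-ness’ are convex, linear, or concave, which is itself an open question.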
This is of course oversimplified—there isn’t a single dimension of EA-ness. I still feel that questions roughly like this one come up relatively often.
A related question, roughly: if we can only transmit, say, 10% of the ‘full EA package’, what is the most valuable 10%? A pitch for AMF? The Astronomical Waste argument? Basic reasons to care about effectiveness when doing good? The basic case for worrying about AI risk? Etc.
Note that it could turn out that moving people ‘too far’ can be bad—e.g., if common EA claims about top cause areas or careers were wrong, and we were transmitting only current ‘answers’ to people without giving them the ability to update these answers when it would be appropriate.
Should we fund people or projects? I.e., to what extent should we provide funding that is ‘restricted’ to specific projects or plans versus, at least for some people, give them funding to do whatever they want? If the latter, what are the criteria for identifying people for whom this is viable?
This is of course a spectrum, and the literal extreme of “give A money to do whatever they want” will very rarely seem correct to me.
It seems to me that I’m more willing than some others to move more toward ‘funding people’, and that when evaluating both people and projects I care less about current “EA alignment” and less about the direct, immediate impact, and more about things like “will providing funding to do X cause the grantee to engage with interesting ideas and have valuable learning experiences”.
How can we move both individual grantees and the community as a whole more toward an ‘abundance mindset’ as opposed to a ‘scarcity mindset’?
This is a pretty complex topic, and an area that is delicate to navigate. As EA has perhaps witnessed in the past, naive ways of trying to encourage an “abundance mindset” can lead to a mismatch of expectations (e.g., people expecting that the bar for getting funding is lower than it in fact is), negative effects from poorly implemented or badly coordinated new projects, etc. I also think there are other reasons for caution against ‘being too cavalier with money’; e.g., it can lead to a lack of accountability.
Nevertheless, I think it would be good if more EAs internalized just how much total funding/capital there would be available if only we could find robustly good ways to deploy it at large scale. I don’t have a great solution, and my thoughts on the general issue are in flux, but, e.g., I personally tentatively think that on the margin we should be more willing to provide larger upfront amounts of funding to people who seem highly capable and want to start ambitious projects.
I think traditional “nonprofit culture” is unfortunately extremely unhelpful here because it encourages risk aversion, excessive weight on saving money, etc. Similarly, it is probably not helpful that a lot of EAs happen to be students or have otherwise mostly experienced money as a relatively scarce resource in their personal lives.
Your points about “How can we structure the EA community in such a way that it can ‘absorb’ very large numbers of people while also improving the allocation of talent or other resources?” are perhaps particularly thought-provoking for me. I think I find your points less convincing/substantive than you do, but I hadn’t thought about them before and I think they do warrant more thought/discussion/research.
On this, readers may find the ‘value of movement growth’ entry/tag interesting. (I’ve also made a suggestion on the Discussion page for a future editor to try to incorporate parts from your comment into that entry.)
Here are some quick gestures at the reasons why I think I’m less convinced by your points than you are. But I don’t actually know my overall stance on how quickly, how large, and simply how the EA movement should grow. And I expect you’ve considered things like this already; this is maybe more for the readers’ benefit, or something.
As you say, “perhaps maths relies crucially on there being a consensus on what the important research questions are, plus it being easy to verify what counts as a solution, as well as a generally better ability to identify talent and good work that is predictive of later potential. Maybe EA is just too ‘preparadigmatic’ to allow for something like that.”
I think we currently have a quite remarkable degree of trust, helpfulness, and coordination. I think that becomes harder as a movement grows, and particularly if it grows in certain ways.
E.g., if we threw 2000 additional randomly chosen people into an EA conference, it would no longer make sense for me to spend lots of time having indiscriminate 1-1 chats where I give career advice (currently I spend a fair amount of time doing things like this, which reduces how much time I have for other useful things). I’d either have to stop doing that or find some way of “screening” people for it, which could impose costs and awkwardness on both parties.
Currently we have the option of either growing more, faster, or differently in future, or not doing so. But certain growth strategies/outcomes would be hard to reverse, which would destroy option value.
You say “I’m worried that we gave up too early”, but I don’t think we’ve come to a final stance on how, how fast, and how large the movement should grow; we’re just not currently pushing for certain types or speeds of growth.
We can push for it later.
(Of course, there are also various costs to delaying our growth.)
I mean I’m not sure how convinced I am by my points either. :) I think I mainly have a reaction of “some discussions I’ve seen seem kind of off, rely on flawed assumptions or false dichotomies, etc.”—but even if that’s right, I feel way less sure what the best conclusion is.
One quick reply:
“I think we currently have a quite remarkable degree of trust, helpfulness, and coordination. I think that becomes harder as a movement grows, and particularly if it grows in certain ways.”
I think the “particularly if it grows in certain ways” is the key part here, and that basically we should talk 90% about how to grow and 10% about how much to grow.
I think one of my complaints is precisely that some discussions seem to construe suggestions of growing faster, or aiming for a larger community, as implying “adding 2,000 random people to EAG”. But to me this seems to be a bizarre strawman. If you add 2,000 random people to a maths conference, or drop them into a maths lecture, it will be a disaster as well!
I think the key question is not “what if we make everything we have bigger?” but “can we build a structure that allows separation between different subcommunities, and a controlled flow of talent and other resources between them?”.
A somewhat grandiose analogy: Suppose that at the dawn of the agricultural revolution you’re a central planner tasked with maximizing the human population. You realize that by introducing agriculture, much larger populations could be supported as far as the food supply goes. But then you realize that if you imagine larger population densities and group sizes while leaving everything else fixed, various things will break—e.g., kinship-based conflict resolution mechanisms will become infeasible. What should you do? You shouldn’t conclude that, unfortunately, the population can’t grow. You should think about division of labor, institutions, laws, taxes, cities, and the state.
(Yeah, this seems reasonable.
FWIW, I used “if we threw 2000 additional randomly chosen people into an EA conference” as an example precisely because it’s particularly easy to explain/see the issue in that case. I agree that many other cases wouldn’t just be clearly problematic, and thus I avoided them when wanting a quick example. And I can now see how that example therefore seems straw-man-y.)
Interesting discussion. What if there were a separate brand for a mass-movement version of EA?
Thanks! This is really interesting.
Minor point: I think it may have been slightly better to make a separate comment for each of your top-level bullet points, since they are each fairly distinct, fairly substantive, and could warrant specific replies.
[The following comment is a tangent/nit-pick, and doesn’t detract from your actual overall point.]
“Yet another example: I’m fairly glad that we have content on the contributions of different animal products to animal suffering (e.g. this or this) even though I think that for most people changing their diet is not among the most effective things they can do to improve the world (or even help farmed animals now).”
I agree that that sort of content seems useful, and also that “for most people changing their diet is not among the most effective things they can do to improve the world (or even help farmed animals now)”. But I think the “even though” doesn’t quite make sense: I think part of the target audience for at least the Tomasik article was probably also people who might use their donations or careers to reduce animal suffering. And that’s more plausibly the best way for them to help farmed animals now, and such people would also benefit from analyses of the contributions of different animal products to animal suffering.
(But I’d guess that that would be less true for Galef’s article, due to having a less targeted audience. That said, I haven’t actually read either of these specific articles.)
(Ah yeah, good point. I agree that the “even though” is a bit off because of the things you say.)
“To give a different example (and one we have discussed before), I’m fairly optimistic about the impact of journalistic pieces like this one, and would be very excited if more people were exposed to it. From an in-the-weeds research perspective, this article has a number of problems, but I basically don’t care about them in this context.”
In case any readers are interested, they can see my thoughts on that piece here: Quick thoughts on Kelsey Piper’s article “Is climate change an “existential threat” — or just a catastrophic one?”
Personally, I currently feel unsure whether it’d be very positive, somewhat positive, neutral, or somewhat negative for people to be exposed to that piece or pieces like it. But I think this just pushes in favour of your overall point that “What products and clusters of ideas work as ‘stepping stones’ or ‘gateways’ toward (full-blown) EA [or similarly ‘impactful’ mindsets]?” is a key uncertainty, and that more clarity on that would be useful.
(I should also note that, in general, I think Kelsey’s work is of remarkably high quality, especially considering the pace at which she’s producing things, and I’m very glad she’s doing the work she’s doing.)