Some related questions with slightly different framings:
What crucial considerations and/or key uncertainties do you think the EAIF operates under?
What types/lines of research do you expect would be particularly useful for informing the EAIF's funding decisions?
Do you have thoughts on what types/lines of research would be particularly useful for informing other funders' funding decisions in the "EA infrastructure" space?
Do you have thoughts on how the answers to questions 2 and 3 might differ?
Some key uncertainties for me are:
What products and clusters of ideas work as "stepping stones" or "gateways" toward (full-blown) EA [or similarly "impactful" mindsets]?
By this I roughly mean: for various products X (e.g., a website providing charity evaluations, or a book, or …), how does the unconditional probability P(A takes highly valuable EA-ish actions within their next few years) compare to the conditional probability P(A takes highly valuable EA-ish actions within their next few years | A now encounters X)?
I weakly suspect that my having different views on this than other fund managers was perhaps the largest source of significant disagreements with others.
It tentatively seems to me that I'm unusually optimistic about the range of products that work as stepping stones in this sense. That is, I worry less about whether products X are extremely high-quality or accurate in all respects, or agree with typical EA views or motivations in all respects. Instead, I'm more excited about increasing the reach of a wider range of products X that meet a high but different bar of roughly "taking the goal of effectively improving the world seriously by making a sincere effort to improve on median do-gooding by applying evidence-based reasoning, and delivering results that are impressive and epistemically useful to someone previously only exposed to median do-gooding", or at least conveying information, or embodying a style of reasoning about the world, that is important for such endeavours.
To give a random example, I might be fairly excited about assigning Steven Pinker's The Better Angels of Our Nature as reading in high school, even though I think some claims in this book are false and that there are important omissions.
To give a different example (and one we have discussed before), I'm fairly optimistic about the impact of journalistic pieces like this one, and would be very excited if more people were exposed to it. From an in-the-weeds research perspective, this article has a number of problems, but I basically don't care about them in this context.
Yet another example: I'm fairly glad that we have content on the contributions of different animal products to animal suffering (e.g. this or this) even though I think that for most people changing their diet is not among the most effective things they can do to improve the world (or even help farmed animals now).
(Though in some other ways I might be more pessimistic / might have a higher bar for such content. E.g., I might care more about content being well written and engaging.)
I think my underlying inside-view model here is roughly:
One of the highest-leverage effects we can have on people is to "dislodge" them from a state of complacency or fatalism about their ability to make a big positive contribution to the world.
To achieve this effect, it is often sufficient to expose people to examples of other people seriously trying to make a big positive contribution to the world, guided by roughly the "right" methods (e.g., a scientific mindset), and doing so in a way that seems impressive to the person exposed.
It is helpful if these efforts are "successful" by generic lights, e.g., produce well-received output.
It doesn't matter that much if a "core EA" would think that, all things considered, these efforts are worthless or net negative because they're in the wrong cause area or miss some crucial consideration or whatever.
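The conditional-vs-unconditional probability comparison posed above can be framed as a simple "lift" ratio. A minimal sketch, where every number is invented purely for illustration (per-1,000 counts keep the arithmetic exact):

```python
# Toy sketch: how much does encountering a product X raise the probability
# that a person takes highly valuable EA-ish actions within a few years?
# All counts below are hypothetical, chosen only to make the ratio concrete.

def lift(actions_per_1000_with_x: int, actions_per_1000_baseline: int) -> float:
    """Ratio of the conditional to the unconditional probability of acting."""
    return actions_per_1000_with_x / actions_per_1000_baseline

baseline = 1          # assume 1 in 1,000 random people takes such actions
after_book = 4        # hypothetical: 4 in 1,000 after reading a popular book
after_evaluator = 20  # hypothetical: 20 in 1,000 after a charity evaluator

print(lift(after_book, baseline))       # 4.0
print(lift(after_evaluator, baseline))  # 20.0
```

On this framing, a product can have modest quality by "core EA" standards and still have a large lift, which is roughly the intuition behind the examples above.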
How can we structure the EA community in such a way that it can "absorb" very large numbers of people while also improving the allocation of talent or other resources?
I am personally quite unsatisfied with many discussions and standard arguments around "how much should EA grow?" etc. In particular, I think the way to mitigate potential negative effects of too rapid or indiscriminate growth might not be "grow more slowly" or "have a community of uniformly extremely high capability levels" but instead: "structure the community in such a way that selection/screening and self-selection push toward a good allocation of people to different groups, careers, discussions, etc.".
ETA: Upon rereading, I worry that the above can be construed as being too indiscriminately negative about discussions on and efforts in EA community building. I think I'm mainly reporting my immediate reaction to a diffuse "vibe" I get from some conversations I remember, not to specific current efforts by people thinking and working on community building strategy full-time (I think often I simply don't have a great understanding of these people's views).
I find it instructive to compare the EA community to pure maths academia, and to large political parties.
Making research contributions to mature fields of pure maths is extremely hard and requires highly unusual levels of fluid intelligence compared to the general population. Academic careers in pure maths are extremely competitive (in terms of, e.g., the fraction of PhDs who'll become tenured professors). A majority of mathematicians will never make a breakthrough research contribution, and will never teach anyone who makes a breakthrough research contribution. But in my experience mathematicians put much less emphasis on only recruiting the very best students, or on only teaching maths to people who could make large contributions, or on worrying about diluting the discipline by growing too fast, etc. And while perhaps in a sense they put "too little" weight on this, I also think they don't need to put as much weight on it, because they can rely more on selection and self-selection: a large number of undergraduates start, but a significant fraction will just realize that maths isn't for them and drop out, ditto at later stages; conversely, the overall system has mechanisms to identify top talent and allocate it to the top schools, etc.
Example: Srinivasa Ramanujan was, by some criteria, probably the most talented mathematician of the 20th century, if not beyond. It seems fairly clear that his short career was only possible because (1) he went to a school that taught everyone the basics of mathematics and (2) later he had access to (albeit perhaps "mediocre") books on advanced mathematics: "In 1903, when he was 16, Ramanujan obtained from a friend a library copy of A Synopsis of Elementary Results in Pure and Applied Mathematics, G. S. Carr's collection of 5,000 theorems.[14]:39[20] Ramanujan reportedly studied the contents of the book in detail.[21] The book is generally acknowledged as a key element in awakening his genius.[21]"
I'm not familiar with Carr, but the brevity of his Wikipedia article suggests that, while he taught at Cambridge, probably the only reason we remember Carr today is that he happened to write a book which happened to be available in some library in India.
Would someone like Carr have existed, and would he have written his Synopsis, if academic mathematics had had an EA-style culture of fixating on the small fraction of top contributors, while neglecting to build a system that can absorb people with Carr-levels of talent, and that consequently can cast a "wide net" that exposes very large numbers of people to mathematics and an opportunity to "rise through its ranks"?
Similarly, only a very small number of people have even a shot at, say, becoming the next US president. But it would probably still be a mistake if all local branches of the Democratic and Republican parties adopted an "elitist" approach to recruitment and obsessed about only recruiting people with unusually good ex-ante chances of becoming the next president.
So it seems that even though these other "communities" also face, along some metrics, very heavy-tailed ex-post impacts, they adopt a fairly different approach to growth, how large they should be, etc., and are generally less uniformly and less overtly "elitist". Why is that? Maybe there are differences between these communities that mean their approaches can't work for EA.
E.g., perhaps maths relies crucially on there being a consensus of what important research questions are plus it being easy to verify what counts as their solution, as well as generally a better ability to identify talent and good work that is predictive of later potential. Maybe EA is just too "preparadigmatic" to allow for something like that.
Perhaps the key difference for political parties is that they have a higher demand for "non-elite" talent (e.g., people doing politics at a local level), plus the general structural feature that in democracies there are incentives to popularize one's views to large fractions of the general population.
But is that it? I'm worried that we gave up too early, and that if we tried harder we'd find a way to create structures that can accommodate both higher growth and a better allocation of talent (which doesn't seem great anyway) within the community, despite these structural challenges.
How large are the returns on expected lifetime impact as we move someone from "hasn't heard of EA at all" toward "is maximally dedicated and believes all kinds of highly specific EA claims, including about, e.g., top cause areas or career priority paths"?
E.g., very crudely, suppose I can either cause N people to move from "1% EA" to "10% EA" or 1 person from "50% EA" to "80% EA". For which value of N should I be indifferent?
This is of course oversimplified: there isn't a single dimension of EA-ness. I still feel that questions roughly like this one come up relatively often.
A related question is roughly: if we can only transmit, say, 10% of the "full EA package", what are the most valuable 10%? A pitch for AMF? The Astronomical Waste argument? Basic reasons to care about effectiveness when doing good? The basic case for worrying about AI risk? Etc.
Note that it could turn out that moving people "too far" can be bad: e.g., if common EA claims about top cause areas or careers were wrong, and we were transmitting only current "answers" to people without giving them the ability to update these answers when it would be appropriate.
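One crude way to make the indifference question above concrete is to posit a toy curve mapping "% EA" to expected lifetime impact and solve for N. Both curves below are pure assumptions, chosen only to show how strongly the answer depends on the shape of the returns:

```python
# Toy model of the indifference question: for which N is moving N people
# from "1% EA" to "10% EA" as valuable as moving 1 person from "50% EA"
# to "80% EA"? 'impact' maps EA-ness in [0, 1] to expected lifetime
# impact in arbitrary units; the functional forms are illustrative guesses.

def indifference_n(impact, lo_from, lo_to, hi_from, hi_to):
    """N such that N * (gain from the low move) == gain from the high move."""
    return (impact(hi_to) - impact(hi_from)) / (impact(lo_to) - impact(lo_from))

linear = lambda x: x       # constant returns to EA-ness
convex = lambda x: x ** 2  # increasing returns to EA-ness

print(indifference_n(linear, 0.01, 0.10, 0.50, 0.80))  # ~3.3
print(indifference_n(convex, 0.01, 0.10, 0.50, 0.80))  # ~39.4
```

So under constant returns a handful of shallow introductions match one deep conversion, while under strongly increasing returns it takes dozens, which is one way of seeing why this question matters so much for outreach strategy.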
Should we fund people or projects? I.e., to what extent should we provide funding that is "restricted" to specific projects or plans, versus, at least for some people, giving them funding to do whatever they want? If the latter, what are the criteria for identifying people for whom this is viable?
This is of course a spectrum, and the literal extreme of "give A money to do whatever they want" will very rarely seem correct to me.
It seems to me that I'm more willing than some others to move toward "funding people", and that when evaluating both people and projects I care less about current "EA alignment" and less about the direct, immediate impact, and more about things like "will providing funding to do X cause the grantee to engage with interesting ideas and have valuable learning experiences".
How can we move both individual grantees and the community as a whole more toward an "abundance mindset" as opposed to a "scarcity mindset"?
This is a pretty complex topic, and an area that is delicate to navigate. As EA has perhaps witnessed in the past, naive ways of trying to encourage an "abundance mindset" can lead to a mismatch of expectations (e.g., people expecting that the bar for getting funding is lower than it in fact is), negative effects from poorly implemented or badly coordinated new projects, etc. I also think there are other reasons for caution against "being too cavalier with money", e.g., that it can lead to a lack of accountability.
Nevertheless, I think it would be good if more EAs internalized just how much total funding/capital would be available if only we could find robustly good ways to deploy it at large scale. I don't have a great solution, and my thoughts on the general issue are in flux, but, e.g., I personally tentatively think that on the margin we should be more willing to provide larger upfront amounts of funding to people who seem highly capable and want to start ambitious projects.
I think traditional "nonprofit culture" is unfortunately extremely unhelpful here, because it encourages risk aversion, excessive weight on saving money, etc. Similarly, it is probably not helpful that a lot of EAs happen to be students or have otherwise mostly experienced money as a relatively scarce resource in their personal lives.
Your points about "How can we structure the EA community in such a way that it can 'absorb' very large numbers of people while also improving the allocation of talent or other resources?" are perhaps particularly thought-provoking for me. I think I find your points less convincing/substantive than you do, but I hadn't thought about them before and I think they do warrant more thought/discussion/research.
On this, readers may find the value of movement growth entry/tag interesting. (I've also made a suggestion on the Discussion page for a future editor to try to incorporate parts from your comment into that entry.)
Here are some quick gestures at the reasons why I think I'm less convinced by your points than you are. But I don't actually know my overall stance on how quickly, how large, and simply how the EA movement should grow. And I expect you've considered things like this already; this is maybe more for the readers' benefit, or something.
As you say, "perhaps maths relies crucially on there being a consensus of what important research questions are plus it being easy to verify what counts as their solution, as well as generally a better ability to identify talent and good work that is predictive of later potential. Maybe EA is just too 'preparadigmatic' to allow for something like that."
I think we currently have a quite remarkable degree of trust, helpfulness, and coordination. I think that becomes harder as a movement grows, particularly if it grows in certain ways.
E.g., if we threw 2,000 additional randomly chosen people into an EA conference, it would no longer make sense for me to spend lots of time having indiscriminate 1-1 chats where I give career advice (currently I spend a fair amount of time doing things like this, which reduces how much time I have for other useful things). I'd either have to stop doing that or find some way of "screening" people for it, which could impose costs and awkwardness on both parties.
Currently we have the option of growing more, faster, or differently in future, or not doing so. But certain growth strategies/outcomes would be hard to reverse, which would destroy option value.
You say "I'm worried that we gave up too early", but I don't think we've come to a final stance on how, how fast, and how large the movement should grow; we're just not currently pushing for certain types or speeds of growth.
We can push for it later.
(Of course, there are also various costs to delaying our growth.)
I mean, I'm not sure how convinced I am by my points either. :) I think I mainly have a reaction of "some discussions I've seen seem kind of off, rely on flawed assumptions or false dichotomies, etc.", but even if that's right, I feel way less sure what the best conclusion is.
One quick reply:
I think we currently have a quite remarkable degree of trust, helpfulness, and coordination. I think that becomes harder as a movement grows, particularly if it grows in certain ways.
I think the "particularly if it grows in certain ways" is the key part here, and that basically we should talk 90% about how to grow and 10% about how much to grow.
I think one of my complaints is precisely that some discussions seem to construe suggestions of growing faster, or aiming for a larger community, as implying "adding 2,000 random people to EAG". But to me this seems to be a bizarre strawman. If you add 2,000 random people to a maths conference, or drop them into a maths lecture, it will be a disaster as well!
I think the key question is not "what if we make everything we have bigger?" but "can we build a structure that allows separation between, and a controlled flow of talent and other resources across, different subcommunities?".
A somewhat grandiose analogy: Suppose that at the dawn of the agricultural revolution you're a central planner tasked with maximizing the human population. You realize that by introducing agriculture, much larger populations could be supported as far as the food supply goes. But then you realize that if you imagine larger population densities and group sizes while leaving everything else fixed, various things will break: e.g., kinship-based conflict resolution mechanisms will become infeasible. What should you do? You shouldn't conclude that, unfortunately, the population can't grow. You should think about division of labor, institutions, laws, taxes, cities, and the state.
FWIW, I used "if we threw 2,000 additional randomly chosen people into an EA conference" as an example precisely because it's particularly easy to explain/see the issue in that case. I agree that many other cases wouldn't be so clearly problematic, and thus I avoided them when wanting a quick example. And I can now see how that example therefore seems straw-man-y.
Minor point: I think it may have been slightly better to make a separate comment for each of your top-level bullet points, since they are each fairly distinct, fairly substantive, and could warrant specific replies.
[The following comment is a tangent/nit-pick, and doesn't detract from your actual overall point.]
Yet another example: I'm fairly glad that we have content on the contributions of different animal products to animal suffering (e.g. this or this) even though I think that for most people changing their diet is not among the most effective things they can do to improve the world (or even help farmed animals now).
I agree that that sort of content seems useful, and also that "for most people changing their diet is not among the most effective things they can do to improve the world (or even help farmed animals now)". But I think the "even though" doesn't quite make sense: I think part of the target audience for at least the Tomasik article was probably also people who might use their donations or careers to reduce animal suffering. And that's more plausibly the best way for them to help farmed animals now, and such people would also benefit from analyses of the contributions of different animal products to animal suffering.
(But I'd guess that that would be less true for Galef's article, due to it having a less targeted audience. That said, I haven't actually read either of these specific articles.)
To give a different example (and one we have discussed before), I'm fairly optimistic about the impact of journalistic pieces like this one, and would be very excited if more people were exposed to it. From an in-the-weeds research perspective, this article has a number of problems, but I basically don't care about them in this context.
Personally, I currently feel unsure whether it'd be very positive, somewhat positive, neutral, or somewhat negative for people to be exposed to that piece or pieces like it. But I think this just pushes in favour of your overall point that "What products and clusters of ideas work as 'stepping stones' or 'gateways' toward (full-blown) EA [or similarly 'impactful' mindsets]?" is a key uncertainty and that more clarity on that would be useful.
(I should also note that in general I think Kelsey's work is of remarkably good quality, especially considering the pace she's producing things at, and I'm very glad she's doing the work she's doing.)
What proportion of the general population might fully buy in to EA principles if they came across them in the right way, and what proportion of people might buy in to some limited version (e.g., become happy to donate to evidence-backed global poverty interventions)? I've been pretty surprised by how much traction "EA" as an overall concept has gotten. Whereas I've maybe been negatively surprised by some limited version of EA not getting more traction than it has. These questions would influence how excited I am about wide outreach, and about how much I think it should be optimising for transmitting a large number of ideas vs simply giving people an easy way to donate to great global development charities.
How much, and in which cases, research is translated into action. I have a hypothesis that it's often pretty hard to translate research into action. Even in cases where someone is deliberating between actions and someone else in another corner of the community is researching a relevant consideration, I think it's difficult to bring these together. I think maybe that inclines me towards funding more "getting things done" and less research than I might naturally be tempted to. (Though I'm probably pretty far on the "do more research" side to start with.) It also inclines me to fund things that might seem like good candidates for translating research into action.
How useful influencing academia is. On the one hand, there are a huge number of smart people in academia, who would like to spend their careers finding out the truth. Influencing them towards prioritising research based on impact seems like it could be really fruitful. On the other hand, it's really hard to make it in academia, and there are strong incentives in place there, which don't point towards impact. So maybe it would be more impactful for us to encourage people who want to do impactful work to leave academia and be able to focus their research purely on impact. Currently the fund managers have somewhat different intuitions on this question.
Interesting, thanks. (And all the other answers here have been really interesting too!)
What proportion of the general population might fully buy in to EA principles if they came across them in the right way, and what proportion of people might buy in to some limited version (e.g., become happy to donate to evidence-backed global poverty interventions)?
Is what you have in mind the sort of thing the "awareness-inclination model" in How valuable is movement growth? was aiming to get at? Like further theorising and (especially?) empirical research along the lines of that model, breaking things down further into particular bundles of EA ideas, particular populations, particular ways of introducing the ideas, etc.?
Some related questions with slightly different framings:
What crucial considerations and/âor key uncertainties do you think the EAIF fund operates under?
What types/âlines of research do you expect would be particularly useful for informing the EAIFâs funding decisions?
Do you have thoughts on what types/âlines of research would be particularly useful for informing other fundersâ funding decisions in the âEA infrastructureâ space?
Do you have thoughts on how the answers to questions 2 and 3 might differ?
Some key uncertainties for me are:
What products and clusters of ideas work as âstepping stonesâ or âgatewaysâ toward (full-blown) EA [or similarly âimpactfulâ mindsets]?
By this I roughly mean: for various products X (e.g., a website providing charity evaluations, or a book, or âŚ), how does the unconditional probability P(A takes highly valuable EA-ish actions within their next few years) compare to the conditional probability P(A takes highly valuable EA-ish actions within their next few years | A now encounters X)?
I weakly suspect that me having different views on this than other fund managers was perhaps the largest source of significant disagreements with others.
It tentatively seems to me that Iâm unusually optimistic about the range of products that work as stepping stones in this sense. That is, I worry less if products X are extremely high-quality or accurate in all respects, or agree with typical EA views or motivations in all respects. Instead, Iâm more excited about increasing the reach of a wider range of products X that meet a high but different bar of roughly âtaking the goal of effectively improving the world seriously by making a sincere effort to improve on median do-gooding by applying evidence-based reasoning, and delivering results that are impressive and epistemically useful to someone previously only exposed to median do-goodingâ - or at least conveying information, or embodying a style of reasoning about the world, that is important for such endeavours.
To give a random example, I might be fairly excited about assigning Steven Pinkerâs The Better Angels of Our Nature as reading in high school, even though I think some claims in this book are false and that there are important omissions.
To give a different example (and one we have discussed before), Iâm fairly optimistic about the impact of journalistic pieces like this one, and would be very excited if more people were exposed to it. From an in-the-weeds research perspective, this article has a number of problems, but I basically donât care about them in this context.
Yet another example: Iâm fairly glad that we have content on the contributions of different animal products to animal suffering (e.g. this or this) even though I think that for most people changing their diet is not among the most effective things they can do to improve the world (or even help farmed animals now).
(Though in some other ways I might be more pessimistic /â might have a higher bar for such content. E.g., I might care more about content being well written and engaging.)
I think my underlying inside-view model here is roughly:
One of the highest-leverage effects we can have on people is to âdislodgeâ them from a state of complacency or fatalism about their ability to make a big positive contribution to the world.
To achieve this effect, it is often sufficient to expose people to examples of other people seriously trying to make a big positive contribution to the world while being guided by roughly the ârightâ methods (e.g. scientific mindset), and are doing so in a way that seems impressive to the person exposed.
It is helpful if these efforts are âsuccessfulâ by generic lights, e.g., produce well-received output.
It doesnât matter that much if a âcore EAâ would think that, all things considered, these efforts are worthless or net negative because theyâre in the wrong cause area or miss some crucial consideration or whatever.
How can we structure the EA community in such a way that it can âabsorbâ very large numbers of people while also improving the allocation of talent or other resources?
I am personally quite unsatisfied with many discussions and standard arguments around âhow much should EA grow?â etc. In particular, I think the way to mitigate potential negative effects of too rapid or indiscriminate growth might not be âgrow more slowlyâ or âhave a community of uniformly extremely high capability levelsâ but instead: âstructure the community in such a way that selection/âscreening and self-selection push toward a good allocation of people to different groups, careers, discussions, etc.â.
ETA: Upon rereading, I worry that the above can be construed as being too indiscriminately negative about discussions on and efforts in EA community building. I think Iâm mainly reporting my immediate reaction to a diffuse âvibeâ I get from some conversations I remember, not to specific current efforts by people thinking and working on community building strategy full-time (I think often I simply donât have a great understanding of these peopleâs views).
I find it instructive to compare the EA community to pure maths academia, and to large political parties.
Making a research contributions to mature fields of pure maths is extremely hard and requires highly unusual levels of fluid intelligence compared to the general population. Academic careers in pure maths are extremely competitive (in terms of, e.g., the fraction of PhDs whoâll become tenured professors). A majority of mathematicians will never make a breakthrough research contribution, and will never teach anyone who makes a breakthrough research contribution. But in my experience mathematicians put much less emphasis on only recruiting the very best students, or on only teaching maths to people who could make large contributions, or on worrying about diluting the discipline by growing too fast or ⌠And while perhaps in a sense they put âtoo littleâ weight on this, I also think they donât need to put as much weight on this because they can rely more on selection and self-selection: a large number of undergraduates start, but a significant fraction will just realize that maths isnât for them and drop out, ditto at later stages; conversely, the overall system has mechanism to identify top talent and allocate it to the top schools etc.
Example: Srinivasa Ramanujan was, by some criteria, probably the most talented mathematician of the 20th century, if not more. It seems fairly clear that his short career was only possible because (1) he went to a school that taught everyone the basics of mathematics and (2) later he had access to (albeit perhaps âmediocreâ) books on advanced mathematics: âIn 1903, when he was 16, Ramanujan obtained from a friend a library copy of A Synopsis of Elementary Results in Pure and Applied Mathematics, G. S. Carrâs collection of 5,000 theorems.[14]:39[20] Ramanujan reportedly studied the contents of the book in detail.[21] The book is generally acknowledged as a key element in awakening his genius.[21]â
Iâm not familiar with Carr, but the brevity of his Wikipedia article suggests that, while he taught at Cambridge, probably the only reason we remember Carr today is that he happened to write a book which happened to be available in some library in India.
Would someone like Carr have existed, and would he have written his Synopsis , if academic mathematics had had an EA-style culture of fixating on the small fraction of top contributors while neglecting to build a system that can absorb people with Carr-levels of talent, and that consequently can cast a âwide netâ that exposes very large numbers of people to mathematics and an opportunity to ârise through its ranksâ?
Similarly, only a very small number of people have even a shot at, say, becoming the next US president. But it would probably still be a mistake if all local branches of the Democratic and Republican parties adopted an âelitistâ approach to recruitment and obsessed about only recruiting people with unusually good ex-ante changes of becoming the next president.
So it seems that even though these other âcommunitiesâ also face, along some metrics, very heavy-tailed ex-post impacts, they adopt a fairly different approach to growth, how large they should be, etc. - and are generally less uniformly and less overtly âelitistâ. Why is that? Maybe there are differences between these communities that mean their approaches canât work for EA.
E.g., perhaps maths relies crucially on there being a consensus of what important research questions are plus it being easy to verify what counts as their solution, as well as generally a better ability to identify talent and good work that is predictive of later potential. Maybe EA is just too âpreparadigmaticâ to allow for something like that.
Perhaps the key difference for political parties is that they have higher demand for ânon-eliteâ talentâe.g., people doing politics at a local level and the general structural feature that in democracies there are incentives to popularize oneâs views to large fractions of the general population.
But is that it? And is it all? Iâm worried that we gave up too early, and that if we tried harder weâd find a way to create structures that can accommodate both higher growth and improve the allocation of talent (which doesnât seem great anyway) within the community, despite these structural challenges.
How large are the returns on expected lifetime impact as we move someone from "hasn't heard of EA at all" toward "is maximally dedicated and believes all kinds of highly specific EA claims, including about, e.g., top cause areas or career priority paths"?
E.g., very crudely, suppose I can either cause N people to move from "1% EA" to "10% EA" or 1 person from "50% EA" to "80% EA". For which value of N should I be indifferent?
This is of course oversimplified: there isn't a single dimension of "EA-ness". I still feel that questions roughly like this one come up relatively often.
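As a toy illustration: under the strong (and probably false) assumption that expected impact scales linearly with "EA-ness", the indifference point has a simple closed form. The function below is purely a hypothetical sketch of that linear model, not a claim about how impact actually scales:

```python
# Toy model: assume expected lifetime impact is linear in "EA-ness".
# This is a strong simplifying assumption -- real returns could well be
# convex (full dedication matters disproportionately) or concave.

def indifference_n(low_from, low_to, high_from, high_to):
    """N such that moving N people from low_from to low_to has the same
    total impact as moving 1 person from high_from to high_to."""
    return (high_to - high_from) / (low_to - low_from)

# N people from "1% EA" to "10% EA" vs 1 person from "50% EA" to "80% EA":
n = indifference_n(0.01, 0.10, 0.50, 0.80)
print(round(n, 2))  # under linearity, N is about 3.33
```

Of course, the interesting versions of this question are exactly the ones where linearity fails, so a sketch like this mainly helps make disagreements about the shape of the returns curve explicit.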
A related question, roughly: if we can only transmit, say, 10% of the "full EA package", which 10% is most valuable? A pitch for AMF? The Astronomical Waste argument? Basic reasons to care about effectiveness when doing good? The basic case for worrying about AI risk? Etc.
Note that it could turn out that moving people "too far" can be bad, e.g., if common EA claims about top cause areas or careers were wrong, and we were transmitting only current "answers" to people without giving them the ability to update these answers when it would be appropriate.
Should we fund people or projects? I.e., to what extent should we provide funding that is "restricted" to specific projects or plans versus, at least for some people, give them funding to do whatever they want? If the latter, what are the criteria for identifying people for whom this is viable?
This is of course a spectrum, and the literal extreme of "give A money to do whatever they want" will very rarely seem correct to me.
It seems to me that I'm more willing than some others to move toward "funding people", and that when evaluating both people and projects I care less about current "EA alignment" and less about the direct, immediate impact, and more about things like "will providing funding to do X cause the grantee to engage with interesting ideas and have valuable learning experiences?".
How can we move both individual grantees and the community as a whole more toward an "abundance mindset" as opposed to a "scarcity mindset"?
This is a pretty complex topic, and an area that is delicate to navigate. As EA has perhaps witnessed in the past, naive ways of trying to encourage an "abundance mindset" can lead to a mismatch of expectations (e.g., people expecting that the bar for getting funding is lower than it in fact is), negative effects from poorly implemented or badly coordinated new projects, etc. I also think there are other reasons for caution against "being too cavalier with money", e.g., it can lead to a lack of accountability.
Nevertheless, I think it would be good if more EAs internalized just how much total funding/capital would be available if only we could find robustly good ways to deploy it at large scale. I don't have a great solution, and my thoughts on the general issue are in flux, but, e.g., I personally tentatively think that on the margin we should be more willing to provide larger upfront amounts of funding to people who seem highly capable and want to start ambitious projects.
I think traditional "nonprofit culture" is unfortunately very unhelpful here because it encourages risk aversion, excessive weight on saving money, etc. Similarly, it is probably not helpful that a lot of EAs happen to be students or have otherwise mostly experienced money being a relatively scarce resource in their personal lives.
Your points about "How can we structure the EA community in such a way that it can 'absorb' very large numbers of people while also improving the allocation of talent or other resources?" are perhaps particularly thought-provoking for me. I think I find your points less convincing/substantive than you do, but I hadn't thought about them before and I think they do warrant more thought/discussion/research.
On this, readers may find the value of movement growth entry/tag interesting. (I've also made a suggestion on the Discussion page for a future editor to try to incorporate parts from your comment into that entry.)
Here are some quick gestures at the reasons why I think I'm less convinced by your points than you are. But I don't actually know my overall stance on how quickly, how large, and simply how the EA movement should grow. And I expect you've considered things like this already; this is maybe more for the reader's benefit, or something.
As you say, "perhaps maths relies crucially on there being a consensus of what important research questions are plus it being easy to verify what counts as their solution, as well as generally a better ability to identify talent and good work that is predictive of later potential. Maybe EA is just too 'preparadigmatic' to allow for something like that."
I think we currently have a quite remarkable degree of trust, helpfulness, and coordination. I think that becomes harder as a movement grows, and particularly if it grows in certain ways.
E.g., if we threw 2,000 additional randomly chosen people into an EA conference, it would no longer make sense for me to spend lots of time having indiscriminate 1-1 chats where I give career advice (currently I spend a fair amount of time doing things like this, which reduces how much time I have for other useful things). I'd either have to stop doing that or find some way of "screening" people for it, which could impose costs and awkwardness on both parties.
Currently we have the option of either growing more, faster, or differently in future, or not doing so. But certain growth strategies/outcomes would be hard to reverse, which would destroy option value.
You say "I'm worried that we gave up too early", but I don't think we've come to a final stance on how, how fast, and how large the movement should grow; we're just not now pushing for certain types or speeds of growth.
We can push for it later.
(Of course, there are also various costs to delaying our growth.)
I mean, I'm not sure how convinced I am by my points either. :) I think I mainly have a reaction of "some discussions I've seen seem kind of off, rely on flawed assumptions or false dichotomies, etc.", but even if that's right, I feel way less sure what the best conclusion is.
One quick reply:
I think the "particularly if it grows in certain ways" is the key part here, and that basically we should talk 90% about how to grow and 10% about how much to grow.
I think one of my complaints is precisely that some discussions seem to construe suggestions of growing faster, or aiming for a larger community, as implying "adding 2,000 random people to EAG". But to me this seems to be a bizarre strawman. If you add 2,000 random people to a maths conference, or drop them into a maths lecture, it will be a disaster as well!
I think the key question is not "what if we make everything we have bigger?" but "can we build a structure that allows both separation between different subcommunities and a controlled flow of talent and other resources between them?".
A somewhat grandiose analogy: Suppose that at the dawn of the agricultural revolution you're a central planner tasked with maximizing the human population. You realize that by introducing agriculture, much larger populations could be supported as far as the food supply goes. But then you realize that if you imagine larger population densities and group sizes while leaving everything else fixed, various things will break; e.g., kinship-based conflict resolution mechanisms will become infeasible. What should you do? You shouldn't conclude that, unfortunately, the population can't grow. You should think about division of labor, institutions, laws, taxes, cities, and the state.
(Yeah, this seems reasonable.
FWIW, I used "if we threw 2,000 additional randomly chosen people into an EA conference" as an example precisely because it's particularly easy to explain/see the issue in that case. I agree that many other cases wouldn't just be clearly problematic, and thus I avoided them when wanting a quick example. And I can now see how that example therefore seems straw-man-y.)
Interesting discussion. What if there were a separate brand for a mass-movement version of EA?
Thanks! This is really interesting.
Minor point: I think it may have been slightly better to make a separate comment for each of your top-level bullet points, since they are each fairly distinct, fairly substantive, and could warrant specific replies.
[The following comment is a tangent/nit-pick, and doesn't detract from your actual overall point.]
I agree that that sort of content seems useful, and also that "for most people changing their diet is not among the most effective things they can do to improve the world (or even help farmed animals now)". But I think the "even though" doesn't quite make sense: I think part of the target audience for at least the Tomasik article was probably also people who might use their donations or careers to reduce animal suffering. And that's more plausibly the best way for them to help farmed animals now, and such people would also benefit from analyses of the contributions of different animal products to animal suffering.
(But I'd guess that that would be less true for Galef's article, due to its less targeted audience. That said, I haven't actually read either of these specific articles.)
(Ah yeah, good point. I agree that the "even though" is a bit off because of the things you say.)
In case any readers are interested, they can see my thoughts on that piece here: Quick thoughts on Kelsey Piper's article "Is climate change an 'existential threat' – or just a catastrophic one?"
Personally, I currently feel unsure whether it'd be very positive, somewhat positive, neutral, or somewhat negative for people to be exposed to that piece or pieces like it. But I think this just pushes in favour of your overall point that "What products and clusters of ideas work as 'stepping stones' or 'gateways' toward (full-blown) EA [or similarly 'impactful' mindsets]?" is a key uncertainty and that more clarity on it would be useful.
(I should also note that in general I think Kelsey's work is remarkably high quality, especially considering the pace she's producing things at, and I'm very glad she's doing the work she's doing.)
Here are a few things:
What proportion of the general population might fully buy in to EA principles if they came across them in the right way, and what proportion might buy in to some limited version (e.g., become happy to donate to evidence-backed global poverty interventions)? I've been pretty surprised by how much traction "EA" as an overall concept has gotten, whereas I've maybe been negatively surprised by some limited version of EA not getting more traction than it has. These questions would influence how excited I am about wide outreach, and how much I think it should be optimising for transmitting a large number of ideas vs simply giving people an easy way to donate to great global development charities.
How much, and in which cases, research is translated into action. I have a hypothesis that it's often pretty hard to translate research into action. Even in cases where someone is deliberating between actions and someone else in another corner of the community is researching a relevant consideration, I think it's difficult to bring these together. I think maybe that inclines me towards funding more "getting things done" and less research than I might naturally be tempted to. (Though I'm probably pretty far on the "do more research" side to start with.) It also inclines me to fund things that seem like good candidates for translating research into action.
How useful influencing academia is. On the one hand, there are a huge number of smart people in academia who would like to spend their careers finding out the truth. Influencing them towards prioritising research based on impact seems like it could be really fruitful. On the other hand, it's really hard to make it in academia, and there are strong incentives in place there which don't point towards impact. So maybe it would be more impactful for us to encourage people who want to do impactful work to leave academia and focus their research purely on impact. Currently the fund managers have somewhat different intuitions on this question.
Interesting, thanks. (And all the other answers here have been really interesting too!)
Is what you have in mind the sort of thing the "awareness-inclination model" in How valuable is movement growth? was aiming to get at? I.e., further theorising and (especially?) empirical research along the lines of that model, breaking things down further into particular bundles of EA ideas, particular populations, particular ways of introducing the ideas, etc.?