I think this post is very good and highlights an important point: reaching people outside of EA is crucial to achieving much of what we want to achieve, and we don’t need those people to become “EAs” for it to be valuable.
It seems like there is a quality-versus-quantity trade-off, where you could grow EA faster by expecting less engagement or commitment. I think there’s a lot of value in thinking about how to make EA scale massively. For example, if we wanted to grow EA to millions of people, maybe we could lower the barrier to entry by paring the movement down to a small number of core ideas or by advertising low-commitment actions such as earning to give. I think massively scaling up the number of people would mostly benefit the most scalable charities, such as GiveDirectly.
The counterargument is that impact per person tends to be long-tailed. For example, the net worth of Sam Bankman-Fried is roughly 100,000 times that of a typical person. Therefore, who is in EA might matter as much as, or more than, how many EAs there are.
My guess is that quality matters more in AI safety, because the bar for talent needed to have a big positive impact there is high. Impact in AI safety also seems long-tailed.
It’s not clear to me whether quality or quantity is more important, because some of the benefits are hard to quantify. One easily measurable metric is donations: adding a sufficiently large number of average donors should have the same financial value as adding a single billionaire.
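To make that concrete, here is a rough back-of-the-envelope sketch in Python. Every number in it is invented purely for illustration (the pledge size, the average donation, and the lognormal spread are assumptions, not estimates):

```python
import numpy as np

# Donor equivalence: how many average donor-years match one huge pledge?
# (Both figures below are hypothetical placeholders.)
big_pledge = 1_000_000_000        # a single $1B pledge
avg_annual_donation = 2_000       # assumed average donor gives $2k/year
print(f"{big_pledge / avg_annual_donation:,.0f} donor-years match one $1B pledge")
# -> 500,000 donor-years

# Long tail: if per-person impact is lognormal with a wide spread,
# a small fraction of people accounts for most of the total.
rng = np.random.default_rng(0)
impact = rng.lognormal(mean=0.0, sigma=3.0, size=100_000)
top_1pct_share = np.sort(impact)[-1_000:].sum() / impact.sum()
print(f"Top 1% of people account for ~{top_1pct_share:.0%} of total impact")
# -> about 75% in expectation with sigma = 3
```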
> It seems like there is a quality-versus-quantity trade-off, where you could grow EA faster by expecting less engagement or commitment. I think there’s a lot of value in thinking about how to make EA scale massively. For example, if we wanted to grow EA to millions of people, maybe we could lower the barrier to entry by paring the movement down to a small number of core ideas or by advertising low-commitment actions such as earning to give. I think massively scaling up the number of people would mostly benefit the most scalable charities, such as GiveDirectly.
I suppose this mostly has to do with growing the size of the “EA community”, whereas I’m mostly thinking about growing the size of “people doing effectively altruistic things”. There’s a big difference in the composition of those groups. I also think there is a trade-off in how community-building resources are spent, but the nice thing about trying to encourage influence is that it doesn’t need to trade off against highly engaged EAs. One analogy: encouraging people to donate 10% doesn’t mean that someone like SBF can’t pledge 99%.
> The counterargument is that impact per person tends to be long-tailed. For example, the net worth of Sam Bankman-Fried is roughly 100,000 times that of a typical person. Therefore, who is in EA might matter as much as, or more than, how many EAs there are.
Yup, agreed. This is my model as well. That being said, I wouldn’t be surprised if the impact of influence also follows a long-tailed distribution: imagine we manage to convince 1,000 people of the importance of AI-related x-risk, and one of them ends up being the person who pushes through some highly impactful policy change.
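As a toy expected-value calculation of that scenario (every number here is a made-up placeholder, just to show the shape of the argument):

```python
# Toy EV of broad influence: many people reached, tiny chance each is pivotal.
# All three numbers are hypothetical assumptions, not estimates.
n_reached = 1_000        # people convinced that AI x-risk matters
p_pivotal = 1 / 1_000    # assumed chance a given person drives a policy win
value_of_win = 10_000    # value of that win, in arbitrary impact units
expected_value = n_reached * p_pivotal * value_of_win
print(expected_value)    # 10000.0 -- the rare pivotal person does the work
```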
> It’s not clear to me whether quality or quantity is more important, because some of the benefits are hard to quantify. One easily measurable metric is donations: adding a sufficiently large number of average donors should have the same financial value as adding a single billionaire.
Agreed. I’m similarly fuzzy on this and would really appreciate it if someone did more analysis here rather than deferring to the meme that EA is growing too fast/slow.
I would love to see more targeted and ambitious efforts to influence others, where the KPI isn’t the number of highly-engaged EAs created.
+1, EA is a philosophical movement as well as a professional and social community.
I agree with this post that it can be useful to spread the philosophical ideas to people who will never be a part of the professional and social community. My sense from talking to, for example, senior professionals who have been convinced to reallocate some of their work to EA-priority causes is that this can be extremely valuable. Relatedly, I’ve heard some people say they value a highly-engaged EA far more than a semi-engaged person, but I think they are probably underweighting the value of mid-to-senior people who never become full-blown community members but are nevertheless influenced to put some of their substantial network and career capital towards important problems.
On a separate note, I perceive an extremely high overlap between the “professional” and the “social” for the highly-engaged EA crowd. For example, my sense is that it’s fairly hard to get accepted to EA Global if your main EA activity is donating a large portion of your objectively-high-but-not-multimillionaire-level tech salary; i.e., you must be a part of the professional community to get access to the social community. I think it would be good to [create more social spaces for EA non-dedicates](https://forum.effectivealtruism.org/posts/aYifx3zd5R5N8bQ6o/ea-dedicates).
When I got involved in kick-starting our local student chapter, I noticed most of our ideas initially drifted to some form of influencing, but we ended up “correcting” that to what has become an internal motto: quality over quantity. While I still think it’s a good initial strategy for a student chapter, your argument did make me think about missed opportunities in influence.
For example, I was recently offered the opportunity to help build the syllabus for an Ethics in Computer Science course, as well as to help create social responsibility modules for an Intro to Econ course. My initial reaction was to prioritize the student chapter, but I can now see a potential opportunity to align both.
I think you’re right on the premise that there’s a way to influence people that doesn’t run the risk of value drift or of unintentionally misrepresenting the EA community to the world; this is probably more in line with traditional education, campaigning, lobbying, and activism. In my (limited) experience in the community, there seems to be a lot of low-hanging fruit in this regard, though there have been advances in this direction, as yesterday’s post on the Social Change Lab seems to show.
But if we can influence these individuals and/or institutions to do things like think about scale, neglectedness, and tractability when deciding what to focus on, that could be extremely valuable.
100%. And get people

- voting for,
- donating to, and
- allocating resources and attention in their organizations and groups

… to causes and interventions that are justified by these criteria (e.g., farmed animal welfare).
I suspect that intentional effective donations are a good proxy metric for the other activities with more difficult feedback loops.
I agree with this entirely. I submitted a post in which I speak to this very idea (though not as clearly and pointedly as you have done):
“What I see missing is promotion of the universal benefits of equality, altruism, and goodwill. Here I mean simple altruism, not necessarily effective altruism. Imagine if only 20% of the population worked for the greater good. Or if every person spent 20% of their time at it? Convincing more of the world population to do right by each other, the environment, animals, and the future, in whatever capacity possible, seems to me to be the best investment the EA community could make. Working at a local soup kitchen may not be the most effective/efficient altruistic pursuit, but what if everyone did something similar, and maximized their personal fit? I have trouble thinking of a downside, but am open to counterpoint ideas.”
I am a mid-career professional, who only discovered EA a year ago, FWIW.
I think that the value is going to vary hugely by the cause area and the exact ask.
For global poverty, anyone can donate money to buy malaria nets, though it’s worth remembering that Dustin Moskovitz is worth a crazy number of low-value donors.
For AI Safety, it’s actually surprisingly tricky to find robustly net-positive actions we can pursue. Unfortunately, it would be very easy to lobby a politician to pass legislation that then makes the situation worse, or to persuade voters that this is an important issue, only to have them vote for things that sound good rather than things that solve the problem.
So I suspect that the value of producing more highly-engaged people actually stacks up better than many people think.
On the other hand, I agree with the shift towards engaging more with the public, which seems necessary at this stage if we don’t want to be defined by our critics.
> I think that the value is going to vary hugely by the cause area and the exact ask.
>
> For global poverty, anyone can donate money to buy malaria nets, though it’s worth remembering that Dustin Moskovitz is worth a crazy number of low-value donors.
>
> For AI Safety, it’s actually surprisingly tricky to find robustly net-positive actions we can pursue. Unfortunately, it would be very easy to lobby a politician to pass legislation that then makes the situation worse, or to persuade voters that this is an important issue, only to have them vote for things that sound good rather than things that solve the problem.
For global health & development, I think it is still quite useful to have influence over things like research and policy prioritisation (what topics academics should research, and what areas of policy think tanks should focus on), government foreign aid budgets, vaccine R&D, etc. This is tangential, but even if Dustin is worth a large number of low-value donors (he is), the marginal donation to effective global poverty charities is still very impactful.
For AI, I agree that it is tricky to find robustly net-positive actions, as of right now at least. I expect this to change over the next few years, and I hope people in relevant positions will be ready to implement these actions once we have more clarity about which ones are good. Whether or not they’re highly engaged EAs doesn’t seem to matter, so long as they actually do the things, IMO.
I really enjoyed reading this. Sharing information is so important when it comes to influencing those levers you mention.
This got me thinking about how this applies to the alternative protein industry, and how it solves the same problems as the Effective Animal Advocacy movement (even though many in the alt protein space may not know much about EA, and so probably aren’t EA-aligned, they can still do impactful work for the EAA space).
I have wondered before what good it would do if we flipped all of the past governmental and industry investment in farmed-animal research on ‘how to breed productive animals’ into ‘here’s a shortcut to selecting the best donor animal cells for cellular agriculture, and the best gene variants for fermentation-derived alt proteins’. Of course, there are foundational considerations like ‘those variants may not be optimal for proteins grown outside of the animal’, etc., but I think the industry isn’t tapping into information that’s already there. I’ve been toying with the idea of drafting a piece on this for open-source dissemination (but between procrastination and not really knowing where to start, it’s still just an idea bouncing around my head). Maybe it’s the animal geneticist in me that thinks the overlap would be good, whereas someone less immersed in it might think it’s not so useful? Thoughts welcome!
The model I keep using in my head to think about these things is the Catholic Church. (Maybe not surprising for an organization that encourages tithing.) There is a highly trained priesthood that thinks very hard about how people can live a moral life and then there is the very much larger body of practicing Catholics. A lot of the quality vs quantity arguing that I see is akin to insisting that all Catholics become priests.
This model would argue for less emphasis on building communities of highly-engaged EAs and more on building communities *around* highly-engaged EAs, communities that can guide less-engaged members through the strength of their relationships with these people. I don’t know what ratio of “priests” to “practicers” maximizes impact (and I really liked Chris Leong’s point that it probably differs across challenges), but I suspect there’s a pretty steep opportunity cost to not filling those pews.
Yes, I think this is a crucial point. In general, it seems like there are not many cost-effectiveness analyses of community building.
This is a very cool model and I would absolutely be thrilled to see someone write up a post about it!