Viewing Effective Altruism as a System
Meta-EA is most often characterised in terms of discrete units such as dollars and individuals. How many people can we recruit, how much will they donate, how many people can we train to be AI researchers? This approach carries a lot of value, particularly when we wish to craft metrics to evaluate our work. At the same time, sometimes it is better to view Effective Altruism as a system, to look at it holistically.
I believe that the primary goal of meta-EA should be achieving impact through improving EA as a system. Our starting place should be different theories of how we could do this, with metrics coming second as a way of differentiating between plans of action and testing hypotheses. I’m not suggesting that quantitative facts should be ignored during the hypothesis-generation stage, just that we need to understand the hypothesis space before we can choose appropriate metrics; otherwise we may artificially limit the set of theories that we consider.
In particular, we need to recognise that sometimes a system is more than the sum of its parts. Effective Altruism is one such system, since the various parts of the movement tend to make the other parts work more effectively. This article will give a brief summary of how Effective Altruism works as a system. Please note that this discussion will not just include official EA orgs, but some EA-aligned orgs as well.
The Effective Altruism Ecosystem:
This section divides up the various parts of the EA ecosystem by function. You may want to skim it if you already have a good understanding of the ecosystem, as otherwise you’ll just be reading things you already know.
Center for Effective Altruism (CEA)/Local Effective Altruism Network (LEAN): Focuses on movement building and guiding the EA movement generally, including writing articles and sending out the newsletter.
Open Philanthropy/Giving What We Can Pledge/Founder’s Pledge/Effective Altruism Funds/Raising for Effective Giving: Provides funding for the causes we support, as well as for many of the other orgs listed here. CEA: Funds local groups. Effective Altruism Funds: Provides funding for smaller projects. GiveWell Incubation Grants: Supports potential new top charities.
Effective Altruism Global/EAGx: Spreads ideas within the EA movement and provides networking opportunities.
Less Wrong/Center for Applied Rationality/Broader rationalsphere: Provides tools for thinking more clearly (epistemic rationality) and for being more effective (applied rationality).
Local EA groups/SHIC: Recruits people into the movement who go on to donate or join orgs, as well as developing them as EAs and often providing a social group. In particular, local groups are present at many of the world’s most prestigious universities, including Oxford, Cambridge, Stanford, Yale, Harvard, Princeton, and MIT.
EA Bay Area Hub: Big enough to deserve its own point. Connects us with, and helps us recruit from, the tech scene. Brings enough EAs together in one place that people can usually find other EAs interested in the same things.
80,000 Hours: Provides career advice, as well as helping effective orgs fill vacancies.
EA Forum/various Facebook groups: Allow the sharing of ideas globally.
Global Priorities Institute: New research institute at Oxford broadly examining EA. Does not just perform research, but also provides academic credibility. There is also a whole host of research institutes for specific causes, such as the Future of Humanity Institute, the Center for the Study of Existential Risk, the Foundational Research Institute, Wild Animal Suffering Research, etc.
GiveWell/Open Philanthropy Project/Animal Charity Evaluators: Charity evaluators for different causes.
Charity Science: Researches potential new top charities and assists people who want to start them.
Interaction Effects:
We can see several ways in which the existence of a broader ecosystem makes certain tasks much more worthwhile. For example, suppose you see an idea for an effective charity on Charity Science. You contact them and they provide you with advice and link you up with potential cofounders. GiveWell provides you with an incubation grant, which you use to hire some staff who were referred through 80,000 Hours so that you can run a pilot. GiveWell evaluates you and you become a top charity. Various Giving What We Can members donate to you and Open Phil provides you with significant support. Given the inherent difficulties of charity entrepreneurship, a single missing part of this pipeline could plausibly derail the whole project and make all the other efforts worthless.
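To make the point concrete, here is a toy model (the stage names and numbers below are invented for illustration, not estimates of real success rates): if every stage of the pipeline is required, the project’s chance of success is roughly the product of the stages, so knocking out any single stage drags the whole product towards zero.

```python
# Toy model of a charity start-up pipeline where every stage is required.
# All probabilities are invented for illustration only.
stages = {
    "idea_and_advice": 0.9,    # research and cofounder matching
    "incubation_grant": 0.8,   # seed funding for a pilot
    "staff_referrals": 0.85,   # hires referred by a careers org
    "evaluation": 0.7,         # becoming a recommended charity
    "donor_base": 0.9,         # pledgers and large funders donating
}

def pipeline_success(probs):
    """Chance the whole project succeeds when every stage must succeed."""
    result = 1.0
    for p in probs.values():
        result *= p
    return result

print(f"All stages present:      {pipeline_success(stages):.2f}")

# Knock out one stage (near-zero chance of getting past it) and the whole
# pipeline collapses, even though the other stages are unchanged.
broken = dict(stages, incubation_grant=0.05)
print(f"Missing incubation step: {pipeline_success(broken):.2f}")
```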
There are many other interaction effects as well. For example, it is much more valuable for the Global Priorities Institute to do research if there is a global movement that will try to put its ideas into action. The Founder’s Pledge is much more valuable with GiveWell and Open Philanthropy existing, since they provide it with research which it can pass on to founders to help them give more effectively. Further, 80,000 Hours is much more effective when there are meetups at top universities to refer people for coaching.
Application:
The main purpose of this post is to encourage more people to adopt a more holistic way of looking at Effective Altruism, which may lead to further ideas for worthwhile projects. Nonetheless, I do want to make a few suggestions about application:
Once you have a map of the EA ecosystem (as above), you can start thinking of different pipelines: becoming an AI researcher, starting a new charity, taking a job in finance in order to earn to give. You can look for gaps in the pipeline and consider whether the gap might be worth filling or whether the cure would be worse than the disease.
One of the greatest difficulties is figuring out how we should handle co-ordination within the movement. If we just examine our marginal impact on the assumption that the status quo remains the same, we will be ignoring any improvements in the efficiency of other components, or the effects of new components being added to the system. In particular, some components may be incredibly valuable if all of them exist, but have minimal value on their own. For example, GiveWell can increase donors’ effectiveness by a factor of ten, but in a world where either nobody had heard of them or nobody listened to them, this component would not be valuable by itself. This is not an easy problem and I don’t really know how to address it, but it is plausible that all of the highest impacts come from combinations of components which each increase the others’ effectiveness.
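As a rough sketch of why status-quo marginal analysis can mislead (again with made-up numbers, not a claim about any real org): when two components only produce value together, the marginal value of adding either one alone looks negligible, even though adding both at once is highly valuable.

```python
# Illustrative only: invented numbers for a two-component interaction.
def system_value(has_research: bool, has_audience: bool) -> float:
    """Donations do 10x as much good only when the research exists AND
    donors actually hear about and follow it."""
    return 10.0 if (has_research and has_audience) else 1.0

baseline = system_value(False, False)

# Marginal impact of each component, holding the status quo fixed.
print("Research alone adds:", system_value(True, False) - baseline)   # 0.0
print("Audience alone adds:", system_value(False, True) - baseline)   # 0.0

# Value of building both components as a co-ordinated package.
print("Both together add:  ", system_value(True, True) - baseline)    # 9.0
```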